Personal Utility Collection

Quick Start in 10 Minutes

Preface

A collection of commonly used utility functions, mainly for processing data from full-vehicle tests of buses and coaches. This guide only covers a selection of the most common functions; the data module contains many more that are not discussed here, so please refer to the docstrings in the source code for detailed usage. All demo files used in this guide can be found in the examples folder under the library's install path.
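
To locate that examples folder, a minimal sketch (assuming it sits directly inside the installed package directory):

import os
import xyw_utils

# Locate the bundled examples folder next to the package source
# (the exact layout may differ; this only illustrates one way to find it).
examples_dir = os.path.join(os.path.dirname(xyw_utils.__file__), 'examples')
print(os.listdir(examples_dir))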

Basic Data Processing Operations

Reading Spreadsheet Files

Only readers for xlsx, xls and csv files are wrapped at the moment; any other format that pandas supports can be read directly with pandas' built-in functions:

import xyw_utils as xy

processor = xy.data.FileProcessorFacade()
df = processor.read('CCBC.csv')

>>> df
       Time [s]  uDCDCIn  Unnamed: 2  Time [s].1  ...  Time [s].11  rBattSOC  Unnamed: 35  Unnamed: 36
0       0.01149    617.0         NaN     0.01207  ...      0.09046      97.6          NaN          NaN
1       4.01241    617.0         NaN     4.01299  ...      0.19049      97.6          NaN          NaN
2       5.01263    617.0         NaN     5.01320  ...      0.29054      97.6          NaN          NaN
3       6.01286    617.0         NaN     6.01344  ...      3.49059      97.6          NaN          NaN
4       7.01309    617.0         NaN     7.01367  ...      3.59036      97.6          NaN          NaN
...         ...      ...         ...         ...  ...          ...       ...          ...          ...
13265       NaN      NaN         NaN         NaN  ...   1329.64510      96.4          NaN          NaN
13266       NaN      NaN         NaN         NaN  ...   1329.74509      96.4          NaN          NaN
13267       NaN      NaN         NaN         NaN  ...   1329.84511      96.4          NaN          NaN
13268       NaN      NaN         NaN         NaN  ...   1329.94501      96.4          NaN          NaN
13269       NaN      NaN         NaN         NaN  ...          NaN       NaN          NaN          NaN

[13270 rows x 37 columns]
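
As mentioned above, formats not covered by the wrapper can be loaded with pandas' own readers; a minimal sketch (the parquet file name is a placeholder, not one of the demo files):

import pandas as pd

# Any other format pandas supports can be read with its built-in functions;
# 'other_data.parquet' is a hypothetical file used purely for illustration.
df_other = pd.read_parquet('other_data.parquet')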

For data that shares a single time series, the processor.read() call above directly yields a pandas.DataFrame whose column names are taken from the first row of the spreadsheet. For data where every signal has its own time series, however, the following additional steps are needed:

dfs = xy.data.init_split(df)

>>> dfs
[        Time [s]  uDCDCIn
0        0.01149    617.0
1        4.01241    617.0
2        5.01263    617.0
3        6.01286    617.0
4        7.01309    617.0
...          ...      ...
1322  1325.31101    617.0
1323  1326.31125    617.0
1324  1327.31149    617.0
1325  1328.31172    617.0
1326  1329.31193    617.0

[1327 rows x 2 columns],         Time [s]  iDCDCIn
0        0.01207      2.0
1        4.01299      3.0
2        5.01320      2.0
3        6.01344      3.0
4        7.01367      2.0
...          ...      ...
1322  1325.31158      3.0
1323  1326.31183      2.0
1324  1327.31207      2.0
1325  1328.31229      2.0
1326  1329.31251      2.0

[1327 rows x 2 columns],          Time [s]  iMCU
0         0.02674   2.0
1         0.12668   2.0
2         0.22669   2.0
3         3.52688   2.0
4         3.62686   2.0
...           ...   ...
13263  1329.59782   3.0
13264  1329.69795   3.0
13265  1329.79783   3.0
13266  1329.89784   3.0
13267  1329.99790   3.0

[13268 rows x 2 columns],          Time [s]   uMCU
0         0.02674  617.0
1         0.12668  617.0
2         0.22669  617.0
3         3.52688  617.0
4         3.62686  617.0
...           ...    ...
13263  1329.59782  617.0
13264  1329.69795  617.0
13265  1329.79783  617.0
13266  1329.89784  617.0
13267  1329.99790  617.0

[13268 rows x 2 columns],          Time [s]  vVehSpdAbs
0         0.02730         0.0
1         0.12725         0.0
2         0.22725         0.0
3         3.52744         0.0
4         3.62742         0.0
...           ...         ...
13263  1329.59837         0.0
13264  1329.69851         0.0
13265  1329.79839         0.0
13266  1329.89839         0.0
13267  1329.99846         0.0

[13268 rows x 2 columns],          Time [s]  iSPCOut
0         0.05502        1
1         0.15502        0
2         0.25507        1
3         3.45522        1
4         3.55571        1
...           ...      ...
13265  1329.56053        1
13266  1329.66052        1
13267  1329.76050        1
13268  1329.86048        1
13269  1329.96046        1

[13270 rows x 2 columns],          Time [s]  uSPCOut
0         0.05502      222
1         0.15502      222
2         0.25507      230
3         3.45522      222
4         3.55571      222
...           ...      ...
13265  1329.56053      223
13266  1329.66052      223
13267  1329.76050      223
13268  1329.86048      222
13269  1329.96046      222

[13270 rows x 2 columns],          Time [s]  iAPCOut
0         0.07561        2
1         0.17561        2
2         0.27562        2
3         3.47580        0
4         3.57581        0
...           ...      ...
13265  1329.60061        0
13266  1329.70075        0
13267  1329.80063        0
13268  1329.90063        0
13269  1330.00179        0

[13270 rows x 2 columns],          Time [s]  uAPCOut
0         0.07561      327
1         0.17561      327
2         0.27562      326
3         3.47580        0
4         3.57581        0
...           ...      ...
13265  1329.60061        0
13266  1329.70075        0
13267  1329.80063        0
13268  1329.90063        0
13269  1330.00179        0

[13270 rows x 2 columns],          Time [s]  iBattCurr
0         0.09046        3.6
1         0.19049        3.8
2         0.29054        3.4
3         3.49059        1.2
4         3.59036        1.2
...           ...        ...
13264  1329.54537        1.2
13265  1329.64510        1.2
13266  1329.74509        1.2
13267  1329.84511        1.2
13268  1329.94501        1.2

[13269 rows x 2 columns],          Time [s]  uBattVltg
0         0.09046      617.7
1         0.19049      617.7
2         0.29054      617.7
3         3.49059      617.9
4         3.59036      617.9
...           ...        ...
13264  1329.54537      617.9
13265  1329.64510      617.9
13266  1329.74509      617.9
13267  1329.84511      617.9
13268  1329.94501      617.9

[13269 rows x 2 columns],          Time [s]  rBattSOC
0         0.09046      97.6
1         0.19049      97.6
2         0.29054      97.6
3         3.49059      97.6
4         3.59036      97.6
...           ...       ...
13264  1329.54537      96.4
13265  1329.64510      96.4
13266  1329.74509      96.4
13267  1329.84511      96.4
13268  1329.94501      96.4

[13269 rows x 2 columns]]

df = xy.data.merge_all(dfs)

>>> df
         Time [s]  uDCDCIn  iDCDCIn  iMCU   uMCU  ...  iAPCOut  uAPCOut  iBattCurr  uBattVltg  rBattSOC
0         0.01149    617.0      NaN   NaN    NaN  ...      NaN      NaN        NaN        NaN       NaN
1         0.01207      NaN      2.0   NaN    NaN  ...      NaN      NaN        NaN        NaN       NaN
2         0.02674      NaN      NaN   2.0  617.0  ...      NaN      NaN        NaN        NaN       NaN
3         0.02730      NaN      NaN   NaN    NaN  ...      NaN      NaN        NaN        NaN       NaN
4         0.05502      NaN      NaN   NaN    NaN  ...      NaN      NaN        NaN        NaN       NaN
...           ...      ...      ...   ...    ...  ...      ...      ...        ...        ...       ...
68994  1329.94501      NaN      NaN   NaN    NaN  ...      NaN      NaN        1.2      617.9      96.4
68995  1329.96046      NaN      NaN   NaN    NaN  ...      NaN      NaN        NaN        NaN       NaN
68996  1329.99790      NaN      NaN   3.0  617.0  ...      NaN      NaN        NaN        NaN       NaN
68997  1329.99846      NaN      NaN   NaN    NaN  ...      NaN      NaN        NaN        NaN       NaN
68998  1330.00179      NaN      NaN   NaN    NaN  ...      0.0      0.0        NaN        NaN       NaN

[68999 rows x 13 columns]

At this point all the separate time series have been merged onto a single common time axis, but every signal column now contains many invalid NaN entries. They can be interpolated with the following method:

df = xy.data.interpolate_nan(df, drop=False)

>>> df
         Time [s]  uDCDCIn   iDCDCIn  iMCU   uMCU  ...  iAPCOut  uAPCOut  iBattCurr  uBattVltg  rBattSOC
0         0.01149    617.0       NaN   NaN    NaN  ...      NaN      NaN        NaN        NaN       NaN
1         0.01207    617.0  2.000000   NaN    NaN  ...      NaN      NaN        NaN        NaN       NaN
2         0.02674    617.0  2.003667   2.0  617.0  ...      NaN      NaN        NaN        NaN       NaN
3         0.02730    617.0  2.003807   2.0  617.0  ...      NaN      NaN        NaN        NaN       NaN
4         0.05502    617.0  2.010735   2.0  617.0  ...      NaN      NaN        NaN        NaN       NaN
...           ...      ...       ...   ...    ...  ...      ...      ...        ...        ...       ...
68994  1329.94501      NaN       NaN   3.0  617.0  ...      0.0      0.0        1.2      617.9      96.4
68995  1329.96046      NaN       NaN   3.0  617.0  ...      0.0      0.0        NaN        NaN       NaN
68996  1329.99790      NaN       NaN   3.0  617.0  ...      0.0      0.0        NaN        NaN       NaN
68997  1329.99846      NaN       NaN   NaN    NaN  ...      0.0      0.0        NaN        NaN       NaN
68998  1330.00179      NaN       NaN   NaN    NaN  ...      0.0      0.0        NaN        NaN       NaN

[68999 rows x 13 columns]

Note: with the drop parameter set to False, the leading and trailing rows that cannot be interpolated remain NaN. If that part of the data is not needed, drop can simply be set to True to discard it (a sketch of that variant follows the output below); alternatively, xy.data.remove_nan() can be used to manually delete every row that contains NaN values:

df = xy.data.remove_nan(df)

>>> df
         Time [s]  uDCDCIn   iDCDCIn  iMCU   uMCU  ...  iAPCOut  uAPCOut  iBattCurr  uBattVltg  rBattSOC
0         0.09046    617.0  2.019593   2.0  617.0  ...      2.0    327.0   3.600000      617.7      97.6
1         0.12668    617.0  2.028646   2.0  617.0  ...      2.0    327.0   3.672418      617.7      97.6
2         0.12725    617.0  2.028788   2.0  617.0  ...      2.0    327.0   3.673558      617.7      97.6
3         0.15502    617.0  2.035729   2.0  617.0  ...      2.0    327.0   3.729081      617.7      97.6
4         0.17561    617.0  2.040876   2.0  617.0  ...      2.0    327.0   3.770249      617.7      97.6
...           ...      ...       ...   ...    ...  ...      ...      ...        ...        ...       ...
68952  1329.26058    617.0  2.000000   3.0  617.0  ...      0.0      0.0   1.200000      617.9      96.4
68953  1329.29780    617.0  2.000000   3.0  617.0  ...      0.0      0.0   1.200000      617.9      96.4
68954  1329.29836    617.0  2.000000   3.0  617.0  ...      0.0      0.0   1.200000      617.9      96.4
68955  1329.30060    617.0  2.000000   3.0  617.0  ...      0.0      0.0   1.200000      617.9      96.4
68956  1329.31193    617.0  2.000000   3.0  617.0  ...      0.0      0.0   1.200000      617.9      96.4

[68957 rows x 13 columns]
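
For reference, a minimal sketch of the drop=True variant mentioned in the note above, which interpolates and discards the un-interpolatable leading/trailing rows in one step (not used in the rest of this walkthrough):

# Interpolate and drop the rows at either end that cannot be interpolated
df_clean = xy.data.interpolate_nan(df, drop=True)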

Saving Data

The formats supported for saving are subject to the same restrictions as reading:

processor.write(df, 're.xlsx')

Integration

Two common methods for integrating discrete data are wrapped: Simpson's rule and the composite trapezoidal rule. Here battery energy consumption is integrated as an example:

df['pb'] = df['iBattCurr'] * df['uBattVltg']
df['wb'] = xy.data.get_integral(df[['Time [s]', 'pb']], method='trapz', last_only=False) / 3600000

>>> df
         Time [s]  uDCDCIn   iDCDCIn  iMCU   uMCU  ...  iBattCurr  uBattVltg  rBattSOC           pb        wb
0         0.09046    617.0  2.019593   2.0  617.0  ...   3.600000      617.7      97.6  2223.720000  0.000000
1         0.12668    617.0  2.028646   2.0  617.0  ...   3.672418      617.7      97.6  2268.452768  0.000023
2         0.12725    617.0  2.028788   2.0  617.0  ...   3.673558      617.7      97.6  2269.156735  0.000023
3         0.15502    617.0  2.035729   2.0  617.0  ...   3.729081      617.7      97.6  2303.453504  0.000041
4         0.17561    617.0  2.040876   2.0  617.0  ...   3.770249      617.7      97.6  2328.882761  0.000054
...           ...      ...       ...   ...    ...  ...        ...        ...       ...          ...       ...
68952  1329.26058    617.0  2.000000   3.0  617.0  ...   1.200000      617.9      96.4   741.480000  2.526091
68953  1329.29780    617.0  2.000000   3.0  617.0  ...   1.200000      617.9      96.4   741.480000  2.526098
68954  1329.29836    617.0  2.000000   3.0  617.0  ...   1.200000      617.9      96.4   741.480000  2.526099
68955  1329.30060    617.0  2.000000   3.0  617.0  ...   1.200000      617.9      96.4   741.480000  2.526099
68956  1329.31193    617.0  2.000000   3.0  617.0  ...   1.200000      617.9      96.4   741.480000  2.526101

[68957 rows x 15 columns]

Note: the method parameter only supports 'simps' and 'trapz'. When last_only is set to True, only the final integral is returned, which greatly reduces computation time; in this example that value is 2.526101 (see the sketch at the end of this subsection). For integrating battery energy specifically, the dedicated function xy.vehicle_test.energy_consumption_test() can also be used:

df['wb'] = xy.vehicle_test.energy_consumption_test(df, 1, 'Time [s]', 'iBattCurr', 'uBattVltg', last_only=False)

>>> df
         Time [s]  uDCDCIn   iDCDCIn  iMCU   uMCU  ...  iBattCurr  uBattVltg  rBattSOC           pb        wb
0         0.09046    617.0  2.019593   2.0  617.0  ...   3.600000      617.7      97.6  2223.720000  0.000000
1         0.12668    617.0  2.028646   2.0  617.0  ...   3.672418      617.7      97.6  2268.452768  0.000023
2         0.12725    617.0  2.028788   2.0  617.0  ...   3.673558      617.7      97.6  2269.156735  0.000023
3         0.15502    617.0  2.035729   2.0  617.0  ...   3.729081      617.7      97.6  2303.453504  0.000041
4         0.17561    617.0  2.040876   2.0  617.0  ...   3.770249      617.7      97.6  2328.882761  0.000054
...           ...      ...       ...   ...    ...  ...        ...        ...       ...          ...       ...
68952  1329.26058    617.0  2.000000   3.0  617.0  ...   1.200000      617.9      96.4   741.480000  2.526091
68953  1329.29780    617.0  2.000000   3.0  617.0  ...   1.200000      617.9      96.4   741.480000  2.526098
68954  1329.29836    617.0  2.000000   3.0  617.0  ...   1.200000      617.9      96.4   741.480000  2.526099
68955  1329.30060    617.0  2.000000   3.0  617.0  ...   1.200000      617.9      96.4   741.480000  2.526099
68956  1329.31193    617.0  2.000000   3.0  617.0  ...   1.200000      617.9      96.4   741.480000  2.526101

[68957 rows x 15 columns]
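
As noted above, setting last_only=True skips the cumulative series and returns only the final integral, which is much faster when the intermediate values are not needed; a minimal sketch:

# Only the final integrated energy (kWh) is returned; per the note above this is 2.526101 for this data set
wb_total = xy.data.get_integral(df[['Time [s]', 'pb']], method='trapz', last_only=True) / 3600000
print(wb_total)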

Common Vehicle Test Operations

Driving Resistance Coefficients

Estimate the vehicle's driving resistance coefficients from its gross mass and vehicle type, using the table in Annex A of GB/T 18386-2017:

f = xy.vehicle_test.driving_resistance(13500, 'coach')

>>>print(f)
       2
0.154 x + 5.95 x + 751.3

>>>print(type(f))
<class 'numpy.poly1d'>

>>>print(f.c)
[1.5400e-01 5.9500e+00 7.5135e+02]

>>>print(f(60))
1662.75

Steady-State Circular Test

Steady-state circular test data processing, mainly based on GB/T 6323-2014 (test methods for vehicle handling and stability), clause 10.4:

# Read the data (from the third worksheet)
df = processor.read('steady_static_circular_test.xlsx', 2)
# Remove rows containing invalid values
df = xy.data.remove_nan(df)
# Process the steady-state circular test data: df, wheelbase, then the column names of each signal
res = xy.vehicle_test.steady_static_circular_test(
    df, 4.050, 'Time [s]', 'AccY_Comp [m/s/s]',
    'V_X [km/h]', 'YAWRATE [deg/s]', 'Roll [deg]'
)

# Print the result dictionary
for key, value in res.items():
    print('{}:{}'.format(key, value))
>>>df:          t       a_y        v_x  ...          r   ratio_r  d_sideslip_angle
0      3.76 -0.538704   8.191092  ...  13.898634  1.010649          0.177798
1      3.77 -0.517155   8.227018  ...  14.021179  1.019560          0.323719
2      3.78 -0.479446   8.262943  ...  13.699623  0.996178         -0.064736
3      3.79 -0.479446   8.298869  ...  13.858317  1.007718          0.129226
4      3.80 -0.506381   8.334795  ...  13.898283  1.010624          0.177376
     ...       ...        ...  ...        ...       ...               ...
1222  16.00 -4.374273  29.818448  ...  15.399734  1.119803          1.805226
1223  16.01 -4.530497  29.818448  ...  15.338217  1.115330          1.744792
1224  16.02 -4.568206  29.854374  ...  15.201510  1.105389          1.608740
1225  16.03 -4.315016  29.854374  ...  15.288837  1.111739          1.695929
1226  16.04 -4.234210  29.854374  ...  15.370341  1.117666          1.776411
[1227 rows x 9 columns]
charts:(<pygal.graph.xy.XY object at 0x000001B1736D8C10>, <pygal.graph.xy.XY object at 0x000001B1736D8BB0>, <pygal.graph.xy.XY object at 0x000001B172F35700>)
curves:(poly1d([ 3.88131566e-04,  2.98285198e-03, -1.73719579e-02,  1.00000000e+00]), poly1d([ 0.00586082,  0.03784224, -0.29914155,  0.        ]), poly1d([-0.01398442, -0.87342555, -0.38406029]))
results:('left', 13.752182739265344, array([ 6.80479099, -2.50024479]), 0.35138275155465365, 0.8454567089306112)

# Print the test results
results = 'Direction: {}\nInitial circle radius: {} m;\nLateral acceleration at the neutral steer point: {} m/s^2;\n' \
          'Understeer gradient: {} (°)/(m/s^2);\nBody roll gradient: {} (°)/(m/s^2).'.format(*res['results'])
>>>print(results)
Direction: left
Initial circle radius: 13.752182739265344 m;
Lateral acceleration at the neutral steer point: [ 6.80479099 -2.50024479] m/s^2;
Understeer gradient: 0.35138275155465365 (°)/(m/s^2);
Body roll gradient: 0.8454567089306112 (°)/(m/s^2).

# Plot the result curves; res['charts'] contains, in order, the turning radius ratio chart,
# the front/rear axle side-slip angle chart, and the body roll angle chart
res['charts'][0].render_in_browser()  # render in the default browser
res['charts'][0].render_to_png('re.png')  # save as a PNG image
res['charts'][0].render_to_file('re.svg')  # save as an SVG file

Note: saving charts as PNG images requires additional software; on Windows, please download and install gtk2-runtime-2.24.10-2012-10-10-ash.exe.

Low-Speed Returnability Test

Low-speed returnability test data processing, mainly based on GB/T 6323-2014 (test methods for vehicle handling and stability), clause 8.4:

# Read the data (from the third worksheet)
df = processor.read('returnability_test.xlsx', 2)
# Remove rows containing invalid values
df = xy.data.remove_nan(df)
# Process the low-speed returnability data: df, the moment the steering wheel is released, then the column names of each signal
res = xy.vehicle_test.returnability_test(df, 143.3, 'Time [s]', 'YAWRATE [deg/s]', 'ANGLE [deg]')

# Print the result dictionary; the charts are the yaw rate and steering wheel angle time histories,
# and the results are the residual yaw rate (sign indicates direction) and the total yaw rate variance
for key, value in res.items():
    print('{}:{}'.format(key, value))
>>>charts:(<pygal.graph.xy.XY object at 0x000001B1737AAF70>, <pygal.graph.xy.XY object at 0x000001B173756430>)
results:(-2.211031768936606, 0.8667891047150287)

Steering Effort Test

Steering effort test data processing, mainly based on GB/T 6323-2014 (test methods for vehicle handling and stability), clause 9.4:

# Read the data (from the first worksheet)
df = processor.read('steering_efforts_test.xlsx', 0)
# Process the steering effort data: df, steering wheel diameter (m), then the column names of each signal
res = xy.vehicle_test.steering_efforts_test(df, 0.48, 't', 'v_x', 'angle', 'torque')

# Print the result dictionary; the chart is the steering wheel torque versus angle curve, and the results are:
# average speed around the lemniscate, maximum left steering wheel angle, maximum right steering wheel angle,
# maximum steering wheel torque, maximum steering wheel force, steering work over one lap of the lemniscate,
# average steering wheel friction torque, and average steering wheel friction force
for key, value in res.items():
    print('{}:{}'.format(key, value))
>>>results:(10.922474511888389, -645.404377133912, 661.401071702456, 6.60583377038265, 27.524307376594376, 94.27873879648446, 2.06678577735684, 8.611607405653501)
chart:<pygal.graph.xy.XY object at 0x000002AB49C38520>

Note: the df data must be clipped to the required segment by the user before being passed to the function; the demo file contains data that was clipped and saved beforehand, not raw data.
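
A minimal sketch of clipping such a segment with plain pandas before calling the function ('t' is the time column of this demo file; t_start and t_end are placeholder bounds chosen by the user):

# Keep only the rows inside the desired time window, then reset the index
t_start, t_end = 10.0, 60.0
df_clip = df[(df['t'] >= t_start) & (df['t'] <= t_end)].reset_index(drop=True)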

Lemniscate Pylon Course

Drawing the lemniscate pylon course diagram, mainly based on GB/T 6323-2014 (test methods for vehicle handling and stability), clause 9.3.1:

# Compute the lemniscate; the first parameter is the vehicle's minimum turning radius (the larger of the left and right turns),
# the second is the vehicle width
res = xy.vehicle_test.draw_lemniscate(15.94/2, width=2.2)

# Print the result dictionary; the chart is a sketch of the lemniscate pylon course, and the results are:
# the coordinates of the 16 pylons, the minimum curvature radius of the lemniscate, the maximum x value,
# the pylon offset, and the x and y coordinates of the lemniscate's highest point on the y axis
for key, value in res.items():
    print('{}:{}'.format(key, value))
>>>chart:<pygal.graph.xy.XY object at 0x000002AB49C38AC0>
results:([(25.151000000000003, 0), (27.451, 0), (16.106007431235092, 8.148807725993692), (16.106007431235092, 10.448807725993692), (16.106007431235092, -10.448807725993692), (16.106007431235092, -8.148807725993692), (1.1500000000000001, 0), (-25.151000000000003, 0), (-27.451, 0), (-16.106007431235092, 8.148807725993692), (-16.106007431235092, 10.448807725993692), (-16.106007431235092, -10.448807725993692), (-16.106007431235092, -8.148807725993692), (-1.1500000000000001, 0), (0, -1.1500000000000001), (0, 1.1500000000000001)], 8.767000000000001, 26.301000000000002, 1.1500000000000001, 16.106007431235092, 9.298807725993692)
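
The chart in res['chart'] is an ordinary pygal object, so it can be rendered the same way as the steady-state circular test charts above ('lemniscate.svg' is just an example file name):

res['chart'].render_in_browser()  # render in the default browser
res['chart'].render_to_file('lemniscate.svg')  # save as an SVG file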

Common PDF Operations

Converting Images to PDF

It is recommended to name the images to be converted and merged sequentially, e.g. 001.jpg, 002.jpg, 003.jpg, and place them in the same folder:

import glob
from xyw_utils.pdf import img_to_pdf, merge_pdfs

# Get a list of all .jpg file names in the imgs folder
imgs = glob.glob(r'imgs/*.jpg')
# Sort the list in ascending order so the pages follow the file-name order
imgs.sort()
# Convert the images to PDF pages and merge them into a single file
img_to_pdf(imgs, 'imgs.pdf')

Merging PDF Files

Concatenate and merge several PDF files into a single file:

# Create five PDF files in the pdfs folder
for i in range(5):
    img_to_pdf('imgs/00{}.jpg'.format(i + 1), 'pdfs/{}.pdf'.format(i + 1))

# Merge the five files just created into one
merge_pdfs(['pdfs/1.pdf', 'pdfs/2.pdf', 'pdfs/3.pdf', 'pdfs/4.pdf', 'pdfs/5.pdf'], 'merge.pdf')

If you need to do more with PDF files, use the pikepdf library directly; only these two small features of it are wrapped here.
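
For example, a minimal pikepdf sketch that removes the first page of the merged file and saves a new copy (the file names are placeholders; this is plain pikepdf, not part of xyw_utils):

import pikepdf

# Open the merged PDF, delete its first page, and write the result to a new file
with pikepdf.open('merge.pdf') as pdf:
    del pdf.pages[0]
    pdf.save('merge_trimmed.pdf')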

Release Notes

0.0.10

  • Added a function to the vehicle_test module for extracting data points at a fixed step size
  • Added a function for calculating the mean fully developed deceleration (MFDD)

0.0.9

  • Fixed a pylon spacing error in the lemniscate pylon course drawing function

0.0.8

  • Added the documentation "Quick Start in 10 Minutes"

0.0.1

  • Initial upload
