
The rascpy project supplements and improves existing statistical methods such as bidirectional stepwise regression, optimal binning, risk scorecards, automatic parameter search for ensemble trees, imputation and transformation of complex data containing both missing and special values, high-dimensional stratified sampling, and reject inference. One major application area is credit risk management.

Project description

rascpy

  • The rascpy project has supplemented and improved existing statistical methods in order to provide data analysts and modelers with a more accurate and convenient algorithmic framework.
  • One major application area is credit risk management. rascpy provides statistical algorithms that are more accurate and easier to use than existing libraries, such as risk scorecard, bidirectional (linear/logistic) stepwise regression, optimal binning, automatic parameter search for ensemble trees, the Impute algorithm for filling and transforming complex data with both missing and special values, high-dimensional stratified sampling, and rejection inference.
    Project evolution process:
  • Phase 1 (ScoreConflow): Provides a "business instruction + unattended" scorecard model development method. The original intention of the rascpy project was to provide more accurate algorithms and more labor-saving scorecard development tools.
  • Phase 2 (Risk Actuarial Score Card): Based on the previous phase, a risk actuarial score card is provided to build a model by comprehensively considering user default, user profit level, and data cost.
  • Phase 3 (Risk Actuarial Statistics): No longer limited to the development of scorecards. Instead, it focuses on refining the ways in which statistics and machine learning fall short of meeting practical needs in risk measurement. Based on the previous stage, it incorporates features such as automated hyperparameter tuning for XGBoost, high-dimensional stratified sampling, imputation algorithms for filling and transforming complex data containing both missing values and special values, as well as rejection inference.

Resources

English Documents, Tutorials, and Examples (Version = 2026.4.2)
Chinese Documents, Tutorials, and Examples (Version = 2026.4.2)

Install

pip install --upgrade rascpy

Notes

  1. In principle, projects generated by any version of rascpy can be loaded and used by newer versions of rascpy. That is, it is feasible to use the latest version of rascpy's ScoreCard.CardFlow.start(load_step) to load a project generated by an older version. However, if you encounter errors after loading, you need to rerun the project with the version of rascpy you are currently using.
  2. If your interaction language changes, previous projects must be rerun before they can be loaded again.
  3. The default interaction language of rascpy Version=2025.12.16 is inconsistent with the operating system language. If later versions encounter errors when loading projects generated by Version=2025.12.16, you need to rerun the project with the later version instead of directly loading the project generated by Version=2025.12.16.
  4. Because one of Python's main applications is data analysis, languages that support data analysis need to keep results in memory persistently, unlike languages designed primarily for one-shot functional execution, which clear memory when the program completes. Therefore, recommended information will be continuously output to the user's console (e.g., the Jupyter output area, Spyder's console area); this is normal.

Version = 2026.4.2 Major Updates

  1. Added an information publishing feature, allowing users to freely publish various types of information (e.g., data product promotions, industry articles and video channels, official accounts, recommendations for books on data and risk control, notifications for offline salon events, corporate recruitment needs, statistical knowledge, etc.). When using rascpy, users will randomly receive one piece of information from their country or region every 60 minutes. If users need to publish information, they can contact the author for review. As long as it does not violate laws or ethical standards, it will be published for free.
  2. When calling rascpy.ScoreCard.get_X_score, a validation check has been added: an exception will be raised if any variable in the card is not provided in X, preventing the issue of missing contributions to the score.
  3. Added the rascpy.Index.Profit metric for model evaluation. Profit evaluates the model by setting the profit per sample, data cost, and overall approval rate (or automatically calculating the optimal approval rate).
  4. The bidirectional stepwise regression algorithm with actuarial functionality has been upgraded from the testing phase to a commercially deployable stage.
  5. When training a model, if measure_index=Profit is specified, the training objective is to derive a model that maximizes profit under the given conditions using the existing samples. Models that support training with the Profit metric: rascpy.StepwiseRegressionSKLearn.LogisticReg. Models planned to support training with the Profit metric in the future: rascpy.Tree.auto_xgb and rascpy.TreeRej.auto_rej_xgb.
  6. You can also automate the construction of a scorecard model aimed at maximizing profitability by setting measure_index=Profit in the instruction file.
  7. In newer versions of pandas, elements of type list are forcibly converted to np.ndarray, causing isinstance(obj, list) to not work as expected. The latest code has been made compatible with this situation.

Version = 2026.1.19 Major Updates

  1. Revised ambiguous wording in the API documentation.
  2. Fixed inconsistent handling of unhandled values for categorical variables across multiple call entry points.
  3. When calling rascpy.ScoreCard.get_X_score and rascpy.ScoreCard.get_row_score, timely alerts are now issued for samples that encounter exceptions. This prevents belated awareness of erroneous data caused by the use of err_handle.
  4. If the training data contains no None values, no bin is created for None. However, other datasets may include None values. In such cases, a warning is displayed during equal-frequency or optimal binning to prompt the user to handle the issue early. (Suggested solutions: include None values in the training set; set custom bins; or, if no other errors occur and the system can handle it automatically, ignore the issue.)

Version = 2025.12.31 Major Updates

  1. Fixed the issue where the default interactive language did not match the operating system language.
  2. Fixed an issue where scorecard rejection inference would fail under specific circumstances.
  3. Added the ability to set the number of iterations for scorecard rejection inference. Refer to rej_iter_num in the "all_instructions_detailed_desc" document.

Version = 2025.12.16 Major Updates

  1. Added a completion progress bar for equal-frequency binning and optimal binning. Progress is displayed even when multiprocessing is used.
  2. Fixed a bug in Report.bfy_df_like_excel_one and Report.bfy_df_like_excel where an error occurred if the color scale contained non‑numeric types.
  3. Renamed the input parameters text_lum and red_max in Report.bfy_df_like_excel to default_text_lum and default_red_max, respectively.

Version = 2025.11.11 Major Updates

  1. StepwiseRegressionSKLearn has replaced Reg_Step_Wise_MP as the built-in bidirectional stepwise regression algorithm in rascpy. StepwiseRegressionSKLearn removes a step from Reg_Step_Wise_MP that has only a very small probability of slightly improving model performance but consumes a significant amount of runtime; after evaluation, this step was removed to save computation time. Reg_Step_Wise_MP remains available, but StepwiseRegressionSKLearn is recommended.
  2. The bidirectional stepwise regression algorithm StepwiseRegressionSKLearn complies with the scikit-learn interface specification and can be used in pipelines.
  3. The operating language automatically switches between Chinese and English.
  4. All automatically generated reports are now beautified using Excel pivot tables as a template.
  5. Added support for Python 3.14.

Project Introduction

Its main functions include:

  1. Provides binning algorithms that support monotonic or U-shaped constraints.
  2. In addition to user-specified constraints, rascpy can automatically identify constraints (monotonic or U-shaped) that apply to the data based on the training and validation sets. Users can export the identified results to Excel and compare them with their own understanding of the variable trends.
  3. Provides a more accurate binning algorithm, resulting in a higher IV. The resulting set of split points is mathematically proven to be globally optimal, with or without constraints, including for categorical variables.
  4. This binning algorithm can also handle complex data types where a feature contains multiple special values and null values at the same time.
  5. Provides Python's bidirectional stepwise regression algorithm (including logistic regression and linear regression) and supports various constraints, such as coefficient sign, p-value, and the number of group variables to be included in the model. The entire model iteration process and the reasons for each variable not being included in the model are all exported to Excel.
  6. A bidirectional stepwise regression algorithm with actuarial functionality is provided. Company profit is incorporated as part of the loss function, which simultaneously takes into account model prediction accuracy, per-user profitability, and data cost. (It has been upgraded from the testing phase to a commercially deployable stage.)
  7. Provides a more convenient missing value filling function. Common missing value filling methods can only handle missing values, but cannot handle special values, especially when the data contains both missing and special values. Special values cannot be simply equated with missing values. Simply treating special values as missing values without considering the business scenario will lead to information loss. Special values transform numeric data into a complex data type that mixes categorical and numerical data. Currently, no model can directly handle this data (although some models can produce results, they are not accurate and have no practical significance). The Impute algorithm provided by rascpy can solve this problem. The transformed data can be directly fed into any model and meet practical business requirements.
  8. Provides a high-precision, high-dimensional stratified sampling function. The stratified sampling built into machine learning libraries is relatively simple, as it only stratifies on the target variable. This can lead to a situation where, after discretizing certain variables into equal-frequency bins based on the same nodes, the event rate of the target differs significantly between the training set and the test set. As a result, it becomes difficult to minimize the discrepancy in metrics between the training and validation sets during modeling. Without high-precision, high-dimensional stratified sampling, the only way to address this issue is to reduce model performance in order to increase model generalization. Another phenomenon is that after binning, the binning results for the training set and the validation set show considerable differences, reflected in a substantial gap in Information Value (IV). The stratified sampling method provided by rascpy.Sampling.split_cls has been tested and shown to significantly alleviate this issue. It reduces the distribution differences between the split datasets without compromising the model performance on the training set or the IV of the binned variables, thereby narrowing the performance gap between the training and validation sets. (Excessive performance differences of a model across different datasets are often caused by inconsistencies in high-dimensional joint distributions; however, due to the limitations of sampling precision, this is often treated as overfitting.)
  9. Provides automatic hyperparameter tuning for XGBoost. Some automated tuning frameworks on the market produce models with a significant gap between training and validation metrics, especially in cases of small or imbalanced samples, where the difference can be extremely large. In contrast, the XGBoost automatic tuning framework provided by rascpy can significantly reduce the discrepancy between training and validation metrics.
  10. Supports rejection inference models for scorecards.
  11. Supports rejection inference models for XGBoost.
  12. Provides a model report output function; users can easily generate Excel documents for model development reports.
  13. Provides batch beautification and export of dataframes to Excel. The output format is similar to Excel's pivot table and has color scales.
  14. Supports one-click unattended automated modeling using the above functions through the AI instruction templates provided by rascpy. However, the functions mentioned above can also be called via traditional APIs, allowing users to assemble each module through programming to build models themselves.

Introduction to main modules

1.Bins

The optimal split points calculated by Bins are mathematically proven to maximize IV. For categorical variables, both ordered and unordered, there is likewise a set of split points that can be proven to maximize IV. Its main functions are:

  1. Find the split point that maximizes IV with or without constraints. Five constraint settings are supported: monotonic (automatically determines increasing or decreasing), monotonically decreasing, monotonically increasing, U-shaped (automatically determines convex or concave), and automatically set appropriate constraints (automatically determines monotonically decreasing, monotonically increasing, convex U-shaped, and concave U-shaped).
  2. For categorical variables with or without constraints, the global optimal split point can also be found to maximize IV.
  3. Use "Minimum difference in event rates between adjacent bins" instead of "Information Gain" or "Chi-Square Gain" to prevent the formation of bins with too small differences. This allows users to intuitively understand the size of the differences between bins. This feature is also supported for categorical variables.
  4. Do not replace the minimum value of the first bin with negative infinity, nor the maximum value of the last bin with positive infinity. This ensures that outliers are not masked by extending extreme values to infinity. RASC also provides a comprehensive mechanism to handle online values exceeding modeling boundaries. This resolves the common contradiction between the need to detect outliers as early as possible during data analysis and the need to mask them in online applications to prevent process bottlenecks (while still providing timely alerts).
  5. The concept of wildcards is introduced to solve the problem that the online values of categorical variables exceed the modeling value range.
  6. Support multi-process parallel computing.
  7. Support binning of weighted samples.
  8. Support left closed and right open binning.

In most cases, users do not need to interact directly with Bins components. However, rascpy is designed to be pluggable, so advanced users can use Bins modules independently, just like any other Python module.
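To make the IV objective concrete, here is a minimal, self-contained sketch of how WOE and IV are computed from per-bin good/bad counts. This is the standard textbook formulation shown for illustration only, not rascpy's implementation, and the counts are made up:

```python
import math

def woe_iv(goods, bads):
    """Compute per-bin WOE and total IV from parallel lists of
    good/bad counts (one entry per bin)."""
    g_tot, b_tot = sum(goods), sum(bads)
    woes, iv = [], 0.0
    for g, b in zip(goods, bads):
        g_pct = g / g_tot          # share of all goods falling in this bin
        b_pct = b / b_tot          # share of all bads falling in this bin
        woe = math.log(g_pct / b_pct)
        woes.append(woe)
        iv += (g_pct - b_pct) * woe
    return woes, iv

# Three bins with increasing event (bad) rates
woes, iv = woe_iv(goods=[500, 300, 200], bads=[20, 30, 50])
print(woes, iv)
```

A binning with higher IV separates goods from bads more sharply; Bins searches for the split points that maximize this quantity under the chosen constraints.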

2.StepwiseRegressionSKLearn

It is a Python implementation of linear bidirectional stepwise regression and logistic bidirectional stepwise regression, adding the following features compared to traditional bidirectional stepwise regression:

  1. During stepwise variable selection for logistic regression, metrics such as AUC, KS, PROFIT, and LIFT (under development) can be selected as alternatives to AIC and BIC. For certain business scenarios, AUC and KS are more aligned with business needs. For example, in ranking tasks, models built using the KS metric, based on past experience, tend to use fewer variables while maintaining KS performance on the test set without degradation. The PROFIT metric can guide model training based on business profitability.
  2. During stepwise variable selection, it supports using other datasets to compute model evaluation metrics instead of using the modeling dataset. Especially when dealing with large datasets that include a validation set in addition to training and test sets, it is recommended to use the validation set to compute evaluation metrics to guide variable selection. This helps reduce overfitting.
  3. It supports using a subset of data to compute model evaluation metrics to guide variable selection. Example scenario: If a business requires maintaining a certain approval rate of N%, it is sufficient to minimize the bad event rate among the top N% of samples, without involving all samples in the computation. Based on past experience, in appropriate scenarios, variables selected using a subset of data for evaluation metrics tend to be fewer than those selected using all data, while the metrics of interest to the user do not degrade on the test set. This is because the model focuses only on the more easily distinguishable samples at the top, requiring fewer variables to achieve business goals.
  4. It supports setting multiple conditions that variables must simultaneously satisfy to enter the model, integrating variable selection with model diagnostics to avoid repeated modeling due to diagnostic failures. Built-in conditions include P-Value, VIF, correlation coefficient, coefficient sign, and the number of variables entering the model within a group.
  5. It supports specifying variables that must be included in the model. If a mandatory variable conflicts with the conditions in point 4, a well-designed mechanism is in place to resolve it.
  6. The modeling process is output to Excel, recording the reason for each variable's removal and the process details of each stepwise regression round.
  7. It allows users to specify the number of variables that can enter the model from each variable group.
  8. It supports the scikit-learn interface and can be used in scikit-learn pipelines.
  9. It supports using the PROFIT metric to guide the training of logistic regression models. By setting the profit for each sample, the cost per feature, and the approval rate, the total profit of the samples can be calculated, and the model is built by maximizing the total profit. Why use profit to build a model? Consider a simulation example: Suppose there are 100 good users, each bringing in $101 in profit, and 10 bad users, each causing a loss of $10. If a model performs exceptionally well, identifying all 10 bad users at the cost of misclassifying only one good user as bad, the model’s KS is 0.99. However, from the company’s profit perspective, not using the model would result in an additional $1 profit. This example illustrates that metrics at the model level sometimes fail to accurately reflect business-level metrics.

In most cases, users do not need to interact directly with the StepwiseRegressionSKLearn component. However, since rascpy adopts a plug-and-play design, advanced users can use the StepwiseRegressionSKLearn module independently, just like any other Python module.
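The profit simulation in point 9 can be verified with a few lines of arithmetic (plain Python, independent of rascpy):

```python
# 100 good users, $101 profit each; 10 bad users, $10 loss each
goods, bads = 100, 10
profit_per_good, loss_per_bad = 101, 10

# Without the model: approve everyone
profit_no_model = goods * profit_per_good - bads * loss_per_bad   # 10100 - 100 = 10000

# With the near-perfect model (KS = 0.99): all 10 bad users rejected,
# at the cost of also rejecting one good user
profit_with_model = (goods - 1) * profit_per_good                  # 99 * 101 = 9999

# The KS = 0.99 model earns $1 less than using no model at all
print(profit_no_model - profit_with_model)
```

This is why a profit-based loss function can rank models differently from purely statistical metrics.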

3.Cutter

Performs equal-frequency segmentation or segmentation according to specified split points, with the following enhancements over the built-in segmenters of Python or pandas:

  1. A mathematically provable analytical solution with minimum global error.
  2. All split points are derived from the original data. The minimum and maximum values for each interval are derived from the original data. This is different from Python or Pandas built-in splitters, which modify the minimum and maximum values at each end of each group.
  3. More practical handling of left-closed, right-open intervals: the last group is right-closed.
  4. A globally optimal segmentation solution can be given even for extremely skewed data.
  5. Support weighted series.
  6. Supports user-specified special values. Special values are grouped separately, and users can also combine multiple special values into one group through configuration.
  7. Users can specify how to handle None values. If not specified and the sequence contains null values, the null values will be automatically grouped together.
  8. When a sequence is split using a specified split point, if the maximum or minimum value of the sequence exceeds the maximum or minimum value of the split point, the maximum and minimum values of the split point will be automatically extended.

It is recommended to try using Cutter to replace the built-in equal frequency segmentation component of Python or Pandas.
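For comparison, the edge-modification behavior mentioned in point 2 can be observed directly with pandas' built-in qcut, which lowers the first bin edge slightly below the observed minimum so that the smallest value falls inside a half-open interval:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])
binned = pd.qcut(s, q=2)

# pandas adjusts the lowest edge below the data minimum (1), so the
# reported interval boundary is no longer an observed value
first_interval = binned.cat.categories[0]
print(first_interval.left)   # a value just below 1, not 1 itself
```

Cutter, by contrast, keeps every interval boundary equal to a value that actually occurs in the data.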

4. Other modules

There are also other modules that can significantly improve the accuracy and efficiency of data analysis and modeling: the rascpy.Impute package can handle data with multiple special values and null values (for binary classification tasks). This solves the problem with existing imputation approaches, which treat special values either as None or as normal values, resulting in information loss or a meaningless model.

  1. Provides a high-precision, high-dimensional stratified sampling function. It addresses the current issue where low sampling precision forces a reduction in model performance to minimize the discrepancy between training set metrics and test set metrics. rascpy.Sampling can narrow the gap between training and test set metrics by reducing distribution differences across datasets without compromising model performance.
  2. Provides automatic parameter tuning for xgboost. rascpy.Tree.auto_xgb differs from other automatic parameter tuning frameworks in that it can reduce the model variance while maintaining high training set metrics.
  3. Support scorecard and xgboost rejection inference.
  4. In addition to manually calling the above modules, users can choose to use AI instructions to automatically complete modeling without supervision.
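As a rough illustration of the idea behind high-dimensional stratified sampling (not rascpy.Sampling's algorithm), one can stratify a scikit-learn split on a composite key combining the target with a discretized feature, so the joint distribution stays balanced across the split. The data here is hypothetical:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy data: alternating binary target and one numeric feature
df = pd.DataFrame({
    "y":  [0, 1] * 50,
    "x1": list(range(100)),
})

# Discretize x1 and combine it with the target into one stratification key,
# so the split preserves the joint (y, x1_bin) distribution, not just y
df["x1_bin"] = pd.qcut(df["x1"], q=4, labels=False)
strat_key = df["y"].astype(str) + "_" + df["x1_bin"].astype(str)

train, test = train_test_split(df, test_size=0.3, random_state=0,
                               stratify=strat_key)
print(train["y"].mean(), test["y"].mean())
```

This sketch only handles one extra dimension; with many variables the strata become tiny and sparse, which is the precision problem rascpy.Sampling is designed to address.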

Usage Tutorial

Scorecard Development Example

from rascpy.ScoreCard import CardFlow
if __name__ == '__main__': # On Windows a main guard is required (not in Jupyter); Linux and macOS do not need it
    # Pass in the command file
    scf = CardFlow('./inst.txt')
    # There are 11 steps in total: 1. Read data, 2. Equal frequency binning, 3. Variable pre-filtering, 4. Monotonicity suggestion, 5. Optimal binning, 6. WOE conversion, 7. Variable filtering, 8. Modeling, 9. Generate scorecard, 10. Output model report, 11. Develop rejection inference scorecard
    scf.start(start_step=1,end_step=11)# will automatically give the score card + the score card for rejection inference
    
    # You can stop at any step, as follows:
    scf.start(start_step=1,end_step=10)#No scorecard will be developed for rejection inference
    scf.start(start_step=1,end_step=9)#No model report will be output
        
    # If the results of the run have not been modified, there is no need to run again. As shown below, steps 1-4 that have been run will be automatically loaded (will not be affected by restarting the computer)
    scf.start(start_step=5,end_step=8)
        
    # You can also omit start_step and end_step, abbreviated as:
    scf.start(1,10)

After each step of scf.start is completed, a lot of useful intermediate data will be retained. This data will be saved in the work_space specified in inst.txt as pkl. Users can manually load and access this data at any time. It can also be called through the CardFlow object instance. The intermediate results generated after each step is completed are:

  • step1: scf.datas
  • step2: scf.train_freqbins,scf.freqbins_stat
  • step3: scf.fore_col_indices,scf.fore_filtered_cols
  • step4: scf.mono_suggests,scf.mono_suggests_eventproba
  • step5: scf.train_optbins,scf.optbins_stat
  • step6: scf.woes
  • step7: scf.col_indices,scf.filtered_cols,scf.filters_middle_data,scf.used_cols
  • step8: scf.clf.in_vars,scf.clf.intercept_,scf.clf.coef_,scf.clf.perf,scf.clf.coef,scf.clf.step_proc,scf.clf.del_reasons
  • step9: scf.card
  • step11: scf.rejInfer.train_freqbins, scf.rejInfer.freqbins_stat, scf.rejInfer.fore_col_indices, scf.rejInfer.fore_filtered_cols, scf.rejInfer.mono_suggests, scf.rejInfer.mono_suggests_eventproba, scf.rejInfer.train_optbins, scf.rejInfer.optbins_stat, scf.rejInfer.woes, scf.rejInfer.col_indices, scf.rejInfer.filtered_cols, scf.rejInfer.filters_middle_data, scf.rejInfer.used_cols, scf.rejInfer.clf.in_vars, scf.rejInfer.clf.intercept_, scf.rejInfer.clf.coef_, scf.rejInfer.clf.perf, scf.rejInfer.clf.coef, scf.rejInfer.clf.step_proc, scf.rejInfer.clf.del_reasons. The newly synthesized dataset for rejection inference is stored in scf.datas['rejData']['__synData']
load_step
# load_step is only loading without execution. If your Python program is closed after execution and needs to be read again, there is no need to run it again. Just load the previous result. Even if the user closes the Python kernel or restarts the computer, the user can easily restore the CardFlow instance and call the intermediate data.
# load_step avoids the trouble of loading pkl to obtain intermediate data. CardFlow instance is equivalent to an intermediate data management container.
# For example: load all steps 5 and before, and then call them through scf.xx
from rascpy.ScoreCard import CardFlow
scf = CardFlow('./inst.txt')
scf.start(load_step = 5)
print(scf.datas)
print(scf.train_optbins)

Command file example

[PROJECT INST]
model_name = Test
work_space = ../ws
no_cores = -1

[DATA INST]
model_data_file_path = ../data/model
oot_data_file_path = ../data/oot
reject_data_file_path = ../data/rej
sample_weight_col = sample_weight
default_spec_value = {-1}

[BINS INST]
default_mono=L+
default_distr_min=0.02
default_rate_gain_min=0.001
default_bin_cnt_max = 8
default_spec_distr_min=${default_distr_min}
default_spec_comb_policy = A

[FILTER INST]
filters = {"big_homogeneity":0.99,"small_iv":0.02,"big_ivCoV":0.3,"big_corr":0.8,"big_psi":0.2}
filter_data_names = {"big_homogeneity":"train,test","small_iv":"train,test","big_ivCoV":"train,test","big_corr":"train","big_psi":"train,test"}

[MODEL INST]
measure_index=ks
pvalue_max=0.05
vif_max=2
corr_max=0.7
default_coef_sign = +

[CARD INST]
base_points=600
base_event_rate=0.067
pdo=80

[REPORT INST]
y_stat_group_cols = data_group
show_lift = 5,10,20

[REJECT INFER INST]
reject_train_data_name = rej
only_base_feas = True
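The instruction file uses an INI-like syntax, and the ${...} reference seen under [BINS INST] resembles the ExtendedInterpolation syntax of Python's configparser. As a sketch of how such a reference resolves (rascpy's own parser may differ), using a trimmed-down fragment of the file above:

```python
import configparser

inst = """
[BINS INST]
default_mono = L+
default_distr_min = 0.02
default_spec_distr_min = ${default_distr_min}
"""

# ExtendedInterpolation resolves ${...} references within a section
cfg = configparser.ConfigParser(
    interpolation=configparser.ExtendedInterpolation())
cfg.read_string(inst)
print(cfg["BINS INST"]["default_spec_distr_min"])  # 0.02
```

So default_spec_distr_min inherits whatever value default_distr_min is set to, and changing one line updates both thresholds.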

Detailed description of all instructions

English all_instructions_detailed_desc.txt
中文全部指令详细说明.txt

Optimal binning example

In the scorecard development example, rascpy.Bins.OptBin/OptBin_mp is automatically called through CardFlow. Users can also manually call OptBin/OptBin_mp to build their own modeling solutions.

# OptBin_mp is a multi-process version of OptBin
from rascpy.Bins import OptBin,OptBin_mp
# Main parameter description
# mono: Specifies the monotonicity constraint for each variable, such as: L+ is linearly increasing, U is automatically selected from positive U or negative U, and A is automatically selected from L+, L-, Uu, and Un based on the data. For the value range, see [BINS INST]:mono in "Detailed Description of All Instructions".
# default_mono: Default monotonicity constraint for variables not set in mono
# distr_min: Specify the minimum bin ratio of normal values except special values for each variable
# default_distr_min: If the variable is not configured in distr_min, the default minimum binning ratio of the normal value
# spec_value: specifies the special value of each variable. For the rules of writing special values, see [DATA INST]:spec_value in "Detailed description of all instructions".
# default_spec_value: The default special value of the variable that does not appear in spec_value. When the special value you configured does not exist in a certain variable, the special value configuration will be automatically ignored. This command is very convenient to use when there is a global unified special value in the data.
# spec_distr_min: The minimum percentage of each special value for each variable (when the type is a double-nested dict) or the minimum percentage of all special values for the variable (when the type is a single-layer dict). If the percentage of special values in a variable is too small, the special values are merged using the merging strategy specified by spec_comb_policy. The purpose of special value merging is to reduce abnormal results caused by special values with too small a percentage.
# default_spec_distr_min: If the variable is not in spec_distr_min, the default minimum proportion of special values under the variable. The value can be a dict (to configure the default minimum proportion for each special value separately) or a number (all special values use the same default proportion)
# spec_comb_policy: Specifies the merging rule for special values for each variable. When the proportion of the special value is less than the threshold specified by spec_distr_min, the merging rule is used. For the value range, see [BINS INST]:spec_comb_policy in "Detailed Description of All Instructions".
# default_spec_comb_policy: If the variable is not configured in spec_comb_policy, the default special value merging rule is used. If the variable has no special value, this parameter is automatically ignored.
# order_cate_vars: Specifies the ordered categorical variables in the data and gives the order of each category. ** represents a wildcard character; all unconfigured categories are merged into the wildcard character. Wildcards are well-suited for variables with long-tail distributions. If the order of a variable is set to None, lexicographic order is used.
# unorder_cate_vars: Specifies the unordered categorical variables in the data. Unordered categories are sorted by event rate. Each variable maps to a float: categories whose proportion is below this threshold are merged into the wildcard category. If the value is None, there is no limit on the proportion (which may cause large fluctuations).
# no_wild_treat: When a categorical variable does not have a wildcard and an uncovered category appears in the data set, the category is handled. For the value range, see [CATE DATA INST]: no_wild_treat in "Detailed Description of All Instructions".
# default_no_wild_treat: If there is no variable configured in no_wild_treat, the default handling method for this category will be used if an uncovered category occurs.
# cust_bins: User manually bins the variable
# cores: The number of CPU cores used for multiprocessing. None: all cores. An int greater than 1: the number of cores to use. Less than 0: the number of cores reserved for the system (i.e., all cores minus the specified number). Equal to 1: multiprocessing is disabled and a single process is used, equivalent to calling OptBin.
if __name__ == '__main__':# On Windows a main guard is required; Linux and macOS do not need it
    optBins = OptBin_mp(X_dats,y_dats,mono={'x1':'L+','x2':'U'},default_mono='A',
                        distr_min={'x1':0.05},default_distr_min=0.02,default_rate_gain_min=0.001,
                        bin_cnt_max={'x2':5},default_bin_cnt_max=8,
                        spec_value={'x1':['{-999,-888}','{-1000,None}']}, default_spec_value=['{-999,-888}','{-1000}'],
                        spec_distr_min={'x1':{'{-1000,None}':0.01,'{-999,-888}':0.05},'x2':0.01},default_spec_distr_min=0.02,
                        spec_comb_policy={'x2':'F','x3':'L'},default_spec_comb_policy='A',
                        order_cate_vars={'x7':['v3','v1','v2'],'x8':['v5','**','v4'],'x9':None},
                        unorder_cate_vars={"x10":0.01,"x11":None},no_wild_treat={'x10':'H','x11':'m'},default_no_wild_treat='M',
                        cust_bins={'x4':['[1.0,4.0)','[4.0,9.0)','[9.0,9.0]','{-997}','{-999,-888}','{-1000,None}']},cores=-1)
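The `cores` resolution rule described above can be sketched in a few lines (an illustrative helper, not rascpy internals):

```python
import os

def resolve_cores(cores):
    """Illustrative sketch of the `cores` semantics documented above:
    None -> all cores; n > 1 -> use n cores; n < 0 -> reserve |n| cores
    for the system; 1 -> single process (multiprocessing off)."""
    total = os.cpu_count() or 1
    if cores is None:
        return total
    if cores == 1:
        return 1                       # single process, equivalent to calling OptBin
    if cores < 0:
        return max(1, total + cores)   # all cores minus the reserved |cores|
    return min(cores, total)
```

So `cores=-1` in the example above leaves one core free for the system.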

Bidirectional stepwise logistic regression example

In the scorecard development example, rascpy.StepwiseRegressionSKLearn.LogisticReg is automatically called through CardFlow. Users can also manually call LogisticReg to build their own modeling solutions.

from rascpy.StepwiseRegressionSKLearn import LogisticReg
import pandas as pd
from sklearn.datasets import make_classification
random_state = 0
# Generate data: 10 variables in total, of which the first 4 are informative, the middle 2 are redundant (collinear with the first 4), and the last 4 are useless. Appropriate noise is added.
X, y = make_classification(n_samples=10000,n_features=10,n_informative=4,n_redundant=2,shuffle=False,random_state=random_state,class_sep=2)
# Convert X to a DataFrame and modify the column names to match the variable effects. rascpy.StepwiseRegressionSKLearn.LogisticReg can only accept DataFrame as X
X = pd.DataFrame(X,columns=['informative_1','informative_2','informative_3','informative_4','redundant_1','redundant_2','useless_1','useless_2','useless_3','useless_4'])
# Convert y to Series. rascpy.StepwiseRegressionSKLearn.LogisticReg can only accept Series as y
y=pd.Series(y).loc[X.index]
# Main parameter description
# measure: The metric used to decide whether a variable should enter the model. Supports aic, bic, roc_auc, ks, and other indicators.
# pvalue_max: The p-value of every model variable cannot exceed this value; rascpy uses a carefully designed mechanism to guarantee this.
# vif_max: The VIF of every model variable cannot exceed this value; rascpy uses a carefully designed mechanism to guarantee this.
# corr_max: The pairwise correlation coefficients of all model variables cannot exceed this value; rascpy uses a carefully designed mechanism to guarantee this.
# iter_num: number of rounds of stepwise regression
# results_save: Output the model effect, information related to the model coefficients, reasons for deleting variables, and details of each round of stepwise regression to an Excel table
# Main attributes:
# Important attributes generated after calling fit
# lr.in_vars: all variables entered into the model
# lr.coef_: coefficients of the model
# lr.intercept_: intercept term of the model
# lr.perf: Summary of model performance
# lr.coef: Detailed summary of model coefficients
# lr.del_reasons: the reason for deletion of each deleted variable
# lr.step_proc: Details of each round of stepwise regression
if __name__ == '__main__':# On Windows the code must run under this main guard; Linux and macOS do not require it
    lr = LogisticReg(measure='roc_auc',pvalue_max=0.05,vif_max=3,corr_max=0.8,results_save='test_logit.xlsx')
    lr = lr.fit(X,y)
    hat = lr.predict_proba(X)
    
# Other important parameters
# user_save_cols: variables the user forces into the model. A carefully designed mechanism handles conflicts between user_save_cols and constraints such as pvalue_max, vif_max, and corr_max.
# coef_sign: dict, used to specify the coefficient sign constraint of each variable
# default_coef_sign: the default sign constraint for variables not listed in coef_sign
# cnt_in_group: the maximum number of variables allowed into the model from each variable group
# default_cnt_in_group: the default maximum number of variables allowed into the model for groups not set in cnt_in_group
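The coef_sign / default_coef_sign constraints can be illustrated with a small checker. The helper below and its '+'/'-' notation are assumptions for illustration only, not rascpy's actual API:

```python
def violates_sign(coefs, coef_sign, default_coef_sign=None):
    """Return the variables whose fitted coefficient breaks its sign constraint.
    coefs: {var: fitted coefficient}; coef_sign: {var: '+' or '-'};
    variables absent from coef_sign fall back to default_coef_sign
    (None means unconstrained). Notation is hypothetical."""
    bad = []
    for var, coef in coefs.items():
        sign = coef_sign.get(var, default_coef_sign)
        if sign == '+' and coef <= 0:
            bad.append(var)           # required positive, fitted non-positive
        elif sign == '-' and coef >= 0:
            bad.append(var)           # required negative, fitted non-negative
    return bad
```

In stepwise regression such a check would run after each fit, with offending variables removed and the model refit.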

XGB automatic parameter search example

from rascpy.Tree import auto_xgb
# Parameter description
# cands_num: auto_xgb will give a score to each hyperparameter tried during automatic parameter search. The higher the score, the more recommended the model trained with the hyperparameter is. Then the scores are sorted from high to low, and the models with the top cands_num scores are returned.
# In actual use, the model with the highest score (i.e. clf_cands[0]) is the best model in most cases. However, users can still select their favorite model from the candidate models clf_cands[n] according to their preferences.
# cost_time: The running time of auto_xgb. Because hyperparameter search is inherently combinatorial, the goal of any search algorithm is to find the most likely optimal set of hyperparameters within a limited time. The longer cost_time is, the more likely the optimal set will be found.
# In practice, however, the author has found that a cost_time of 3-5 minutes yields the optimal model in most cases; setting it longer generally does not produce a higher-scoring model. If dissatisfied with the model, users can try increasing cost_time, but going beyond 8 minutes is not recommended and is likely to be ineffective.
# If the user is not satisfied with the bias or variance of the model, the best approach is not to increase cost_time but to prepare the data more accurately, for example with rascpy.Impute.BCSpecValImpute
# Return value description
# perf_cands: list. Metrics of all candidate models. Each metric contains three pieces of information: train_ks(train_auc), val_ks(val_auc), |train - val| (the absolute value of the difference between the training set and the validation set)
# params_cands: list. Hyperparameters of all candidate models
# clf_cands: list. All candidate models
# vars_cands: list. All candidate model input variables
# Note: The indexes of these 4 return values correspond to each other. If the user decides to use the clf_cands[0] model, they can view its metrics through perf_cands[0], its hyperparameters through params_cands[0], and its input variables through vars_cands[0].
perf_cands,params_cands,clf_cands,vars_cands = auto_xgb(train_X,train_y,val_X,val_y,metric='ks',cost_time=60*5,cands_num=5)
proba_hat = clf_cands[0].predict_proba(X)[:,1]#The columns of X need to completely correspond to the columns during training. Even if a column is not entered into the model, it must be passed into the predict_proba method.
# When making predictions, you can also try to use the more convenient predict_proba
from rascpy.Tool import predict_proba
proba_hat = predict_proba(clf_cands[0],X[vars_cands[0]],decimals=4)#Only the variables to be input into the model need to be passed in, which is very convenient for online systems. And the returned proba_hat is a Series with the same row index as X.
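The candidate scoring idea described above (reward validation performance, penalize the train/validation gap) can be sketched with a toy ranking; auto_xgb's real scoring rule is internal, so treat this purely as an illustration:

```python
def rank_candidates(perfs, gap_penalty=1.0):
    """perfs: list of (train_metric, val_metric, abs_gap) tuples, in the same
    shape as perf_cands entries. Score = validation metric minus a penalty
    on |train - val|; returns candidate indices, best first."""
    scores = [val - gap_penalty * gap for _, val, gap in perfs]
    return sorted(range(len(perfs)), key=lambda i: scores[i], reverse=True)

# Candidate 2 has the best raw validation KS gap-adjusted score:
perfs = [(0.62, 0.55, 0.07), (0.58, 0.56, 0.02), (0.70, 0.52, 0.18)]
order = rank_candidates(perfs)  # index 1 wins: high val_ks AND a small gap
```

This mirrors why clf_cands[0] is usually, but not always, the model a user would pick by hand.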

Impute Example

BCSpecValImpute handles special values and missing values in data for binary classification problems, covering continuous, unordered categorical, and ordered categorical variables. It can simultaneously fill empty values and transform special values. When data contains both null values and special values, most models cannot handle them well (in a business-friendly way), so we recommend using rascpy.Impute.BCSpecValImpute to preprocess the data before training a binary classification model.

from rascpy.Impute import BCSpecValImpute
# Main parameter description
# spec_value: specifies the special value of each variable. For the rules of writing special values, see [DATA INST]:spec_value in "Detailed description of all instructions".
# default_spec_value: The default special values for variables that do not appear in spec_value. When a configured special value does not exist in a variable, that configuration is ignored automatically. This instruction is very convenient when the data has globally unified special values.
# order_cate_vars: Specifies the ordered categorical variables in the data and gives the order of each category. ** is a wildcard; all unconfigured categories are merged into it. Wildcards are well-suited for variables with long-tail distributions. If a variable's order is set to None, lexicographic order is used.
# unorder_cate_vars: Specifies the unordered categorical variables in the data. Unordered categories are sorted by event rate. float: any category whose proportion is below the threshold is merged into the wildcard category. None: no limit on the proportion (which may cause large fluctuations)
# impute_None: Whether to fill null values. Some models can handle null values automatically; if such a model is used later, null values can be ignored during filling and only special values need handling. (Almost no model can directly handle data containing both null values and special values)
bcsvi = BCSpecValImpute(spec_value={'x1':['{-999,-888}','{-1000,None}']
        ,'x11':['{unknow}']},default_spec_value=['{-999}','{-1000}'],
        order_cate_vars={'x8':['v5','**','v4'],'x9':None},
        unorder_cate_vars={"x10":0.01,"x11":None},impute_None=True,cores=None)
bcsvi.fit(trainX,trainy,weight=None) # weight=None can be omitted
trainX = bcsvi.transform(trainX)
# trainX = bcsvi.fit_transform(trainX,trainy)
testX = bcsvi.transform(testX)
    
# View the specific filling rules:
print(bcsvi.impute_values)
# Output format: {'x1':{-999:2,-888:1,-1000:0,None:0},'x2':{-999:1,-1000:0},'x8':{None:'D'},'x11':{'unknow':'A'}}
# From the results we can see that for the numeric variable x1, the special value -999 is filled with 2 and the empty value with 0, etc.; for the categorical variable x11, the special value 'unknow' is filled with 'A'
# If a variable name is missing from the first-level dict, that variable has no special value in the training set and needs no filling. (But beware of special values that exist only in other datasets)
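The impute_values mapping in the format above can be applied by hand, which helps clarify what transform does (a sketch, not rascpy's implementation):

```python
import pandas as pd

# Fill rules in the format printed by bcsvi.impute_values:
# special value -999 of x1 is replaced by 2, -888 by 1, -1000 by 0, nulls by 0
rules = {'x1': {-999: 2, -888: 1, -1000: 0, None: 0}}

s = pd.Series([3.5, -999, None, -1000, 7.0], name='x1')
# Replace each special value with its fill value; the None key covers nulls,
# and ordinary values pass through unchanged
filled = s.map(lambda v: rules['x1'].get(v, v) if pd.notna(v)
               else rules['x1'].get(None, v))
```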

High-dimensional stratified sampling example

rascpy.Sampling.split_cls provides a high-precision, high-dimensional stratified sampling function.

The stratified sampling built into common machine-learning libraries is relatively simple: it stratifies only on the target variable. This can lead to a situation where, after certain variables in the training and test sets are discretized into equal-frequency bins using the same nodes, the event rate of the target differs significantly between the two datasets, making it difficult to minimize the metric gap between training and validation during modeling. Without high-precision, high-dimensional stratified sampling, the only remedy is to sacrifice model performance in exchange for generalization. Another symptom is that, after binning, the results for the training and validation sets differ considerably, reflected in a substantial gap in Information Value (IV). The stratified sampling method provided by rascpy.Sampling.split_cls has been tested and shown to significantly alleviate this issue: it reduces the distribution differences between the split datasets without compromising model performance on the training set or the IV of the binned variables, thereby narrowing the performance gap between training and validation. (Excessive performance differences of a model across datasets are often caused by inconsistencies in high-dimensional joint distributions; due to the limits of sampling precision, however, this is often mistaken for overfitting.)

The most important criterion for evaluating the effectiveness of a high-dimensional stratified sampling algorithm is whether the joint distribution of each individual variable and the target remains consistent across all split datasets—that is, the datasets satisfy the assumption of identical distribution. If the data itself needs to be further divided into multiple cross-groups, then it is also required that the joint distribution of each individual variable and the target remains consistent within each cross-group across all split datasets.

rascpy.Sampling.split_cls is specifically designed for high-precision sampling in binary classification problems. Compared with multiple sampling algorithms, whether the data contains grouping (or cross-grouping) or not, it consistently performs well in maintaining the consistency of joint distributions across the split datasets.

from rascpy.Sampling import split_cls
# Main parameter description
# dat: DataFrame dataset
# y: column name of the label
# test_size: proportion of the data sampled into the test set
# w: column name of the weight
# groups: column names used for data grouping
train,test = split_cls(dat,y='y',test_size=0.3,w='weight',groups=['c1','c2'],random_state=0)
dat_train = dat.loc[train.index]
dat_test = dat.loc[test.index]
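A generic way to check the consistency criterion above, independent of rascpy: bin a variable with the same quantile nodes on both splits and compare per-bin event rates. A small per-bin gap means the variable's joint distribution with the target is consistent across the splits:

```python
import numpy as np
import pandas as pd

# Synthetic binary-classification data for the check
rng = np.random.default_rng(0)
df = pd.DataFrame({'x': rng.normal(size=2000)})
df['y'] = (df['x'] + rng.normal(size=2000) > 0).astype(int)

train = df.sample(frac=0.7, random_state=0)
test = df.drop(train.index)

# Cut both splits with the SAME nodes, derived from the full data
nodes = df['x'].quantile([0, .25, .5, .75, 1]).values
nodes[0], nodes[-1] = -np.inf, np.inf
tr_rate = train.groupby(pd.cut(train['x'], nodes), observed=True)['y'].mean()
te_rate = test.groupby(pd.cut(test['x'], nodes), observed=True)['y'].mean()
max_gap = (tr_rate - te_rate).abs().max()  # small gap = consistent joint distribution
```

Running the same check on a split_cls split versus a plain random split is one way to see the improvement the prose above describes.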

Scorecard Rejection Inference Model example

There are three methods for developing scorecard rejection inference models; users can choose whichever fits their situation. Method 1: Build the standard scorecard and the rejection inference scorecard simultaneously. Suitable for developing a scorecard from scratch.

from rascpy.ScoreCard import CardFlow
if __name__ == '__main__':# On Windows the code must run under this main guard; Linux and macOS do not require it
    # Pass in the instruction file
    scf = CardFlow('./inst.txt')
    scf.start(start_step=1,end_step=11)# Automatically generates both the standard scorecard and the rejection inference scorecard

Method 2: Complete the standard scorecard first, then generate the rejection inference scorecard. This is suitable for those who have already generated the standard scorecard with rascpy and need to generate a rejection inference scorecard.

from rascpy.ScoreCard import CardFlow
if __name__ == '__main__':
    # Pass in the instruction file
    scf = CardFlow('./inst.txt')
    scf.start(start_step=11,end_step=11)#If you have already run step 1 to step 10, you can set both start_step and end_step to 11 to generate a rejection inference scorecard.

Method 3: Call the CardRej module directly. This is suitable for those who developed a scorecard with other Python packages and want to use rascpy to generate a rejection inference scorecard.

from rascpy.ScoreCardRej import CardRej
if __name__ == '__main__':
    # Main parameter description
    # init_clf: unbiased logistic regression model
    # init_optbins_stat_train: Unbiased bin statistics. Format: {'x1':pd.DataFrame(columns=['bin','woe'])}
    # datas: data passed in by the user. Format example: {'rejData':{'rej':pd.DataFrame(),'otherRej':pd.DataFrame()},'ootData':{'oot1':pd.DataFrame(),'oot2':pd.DataFrame()}}
    # inst_file: Instruction file. The instructions are the same as those in the 'Scorecard Development Example'; see "Detailed Description of All Instructions". If datas is empty, all data files under [DATA INST]:xx_data_file_path in inst_file are loaded automatically. If datas is not empty, the [DATA INST]:xx_data_file_path configuration is ignored.
    cr = CardRej(init_clf,init_optbins_stat_train,datas=None,inst_file='inst.txt')
    cr.start()

For the intermediate data generated by rejection inference, refer to step 11 of the "Scorecard Development Example". The intermediate data is accessed via scf.rejInfer.xx in Methods 1 and 2, and via cr.xx in Method 3.

Tree Rejection Inference Model

from rascpy.TreeRej import auto_rej_xgb
# Main parameter description
# xx_w: weight of the corresponding dataset
# metric: two options, ks or auc
# Return value description
# not_rej_clf: the xgb model without rejection inference
# rej_clf: the rejection inference xgb model
# syn_train: synthetic data used to train the final round of the rejection inference model
# syn_val: synthetic data used to validate the final round of the rejection inference model
not_rej_clf,rej_clf,syn_train,syn_val = auto_rej_xgb(train_X,train_y,val_X,val_y,rej_train_X,rej_val_X,train_w=None,val_w=None,rej_train_w=None,rej_val_w=None,metric='auc')

Report Beautification example

Beautify a single DataFrame (Series) and export it to Excel.

from rascpy.Report import bfy_df_like_excel_one
import numpy as np
import pandas as pd
df1 = pd.DataFrame(np.random.randn(10, 4), columns=['A1', 'B1', 'C1', 'D1'])
df1['SCORE_BIN']=['[0,100)','[100,200)','[200,300)','[300,400)','[400,500)','[500,600)','[600,700)','[700,800)','[800,900)','[900,1000]']
r,c = bfy_df_like_excel_one(df1,'df1.xlsx',title=['DF1','any'],percent_cols=df1.columns,color_gradient_sep=True,text_lum=0,row_num=2,col_num=2)
print(r,c)

Beautify multiple DataFrames (Series) and export them to the same Excel sheet with proper layout.

from rascpy.Report import bfy_df_like_excel
import numpy as np
import pandas as pd
df_grid=[]
df_rows1=[]# DataFrames output to layout row 1 go in this list
df_rows2=[]# DataFrames output to layout row 2 go in this list
df_rows3=[]# DataFrames output to layout row 3 go in this list
df_rows4=[]# DataFrames output to layout row 4 go in this list
# Note: a "row" here is a row of tables in the layout, not a row in Excel
df_grid.append(df_rows1)
df_grid.append(df_rows2)
df_grid.append(df_rows3)
df_grid.append(df_rows4)

df1 = pd.DataFrame(np.random.randn(10, 4), columns=['A1', 'B1', 'C1', 'D1'])
df1['SCORE_BIN']=['[0,100)','[100,200)','[200,300)','[300,400)','[400,500)','[500,600)','[600,700)','[700,800)','[800,900)','[900,1000]']
#Output df1 to the first table in the first row
df_rows1.append({'df':df1[['SCORE_BIN','A1', 'B1', 'C1', 'D1']],'title':['notAnum'],'percent_cols':df1.columns,'color_gradient_sep':True})

df2 = pd.DataFrame(np.random.randn(2, 4), columns=['A2', 'B2', 'C2', 'D2'])

#Output df2 to the second table in row 1
df_rows1.append({'df':df2,'color_gradient_cols':['A2','C2'],'title':['percent for BC','gradient_color for AC'],'percent_cols':['B2','C2']})

df3 = pd.DataFrame(np.random.randn(15, 4), columns=['A3', 'B3', 'C3', 'D3'])
#Output df3 to the first table in the second row
df_rows2.append({'df':df3,'color_gradient_cols':['B3'],'title':['red_max==True'],'red_max':True})

df4 = pd.DataFrame(np.random.randn(4, 5), columns=['A4', 'B4', 'C4', 'D4', 'E4'])
#Output df4 to the second table in the second row
df_rows2.append({'df':df4,'color_gradient_sep':False,'text_lum':0.4,'title':['text_lum=0.4']})

df5 = pd.DataFrame(np.random.randn(10, 15), columns=map(lambda x:'Col_%d'%x,range(1,16)))
#Output df5 to the first table in the third row
df_rows3.append({'df':df5,'color_gradient_sep':True,'title':['not_align']})

df6 = pd.DataFrame({'A6':[1,2,3,4,None],'B6':[0.1,1.2,100.5,7.4,1]})
#Output df6 to the first table in row 4
df_rows4.append({'df':df6,'color_gradient_sep':True,'percent_cols':['A6']})

df7 = pd.DataFrame({'A7':[1,2,3,4],'B7':[0.1,1.2,100.5,7.4]})
#Output df7 to the second table in row 4
df_rows4.append({'df':df7,'not_color_gradient_cols':df7.columns,'title':['ALL not_color_gradient']})

#Two output methods, choose one according to actual needs
r,c = bfy_df_like_excel(df_grid,'pivot.xlsx',sheet_name='demo',default_red_max=False,row_num=4,col_num=4,row_gap=2,col_gap=2,align=True,ex_align=[2])
# or
with pd.ExcelWriter('pivot.xlsx') as writer:
    r,c = bfy_df_like_excel(df_grid,writer,sheet_name='demo',default_red_max=False,row_num=4,col_num=4,row_gap=2,col_gap=2)
print(r,c)

Multi-language switching

  1. Automatic switching: rascpy switches automatically based on the operating system's language.
  2. Currently, Chinese and English are supported. Users can add their own languages and switch manually.
  3. Manual permanent switching: modify the xx/Lib/site-packages/rascpy/__init__.py file, where xx is the Python installation location. See the comments in __init__.py for how to switch.

Contact Information

Email: scoreconflow@gmail.com
Email: scoreconflow@foxmail.com
WeChat: SCF_04



