
Arena mine model Input Analyzor, working with Excel input workbook 116D


Tutorial

Last modification: Alex Z 26/10/2020

1.Introduction

This small package eases the validation of input data in the mine model. It supports input workbook version 116D and workbooks with the same structure.

As prerequisites, the user should already have the following Python packages: NetworkX, pandas, NumPy, Matplotlib.

In terms of usage, it is recommended to apply this package in Jupyter Notebook or another interactive tool. An example file ‘UseCases.ipynb’ is provided in the example directory and should be opened in Jupyter Notebook.

2.Import data

To apply this package, first put the Python file ‘PlantVisualization.py’ in the same directory and import the package.

import input_analyzor as ia

Second, import the input workbook by giving a valid path. It returns a dictionary that contains most of the equipment details as pandas DataFrames.

data_dict = ia.access_data(r'./src/KD2_116d_l8f8u4_66%.xlsb')

The keys and the equipment details they refer to are listed in the table below. The data can be viewed easily, for example the crushers, by using data_dict[1]

Key  Equipment details
1    Crusher
2    Screen
3    Conveyor
4    Bin
5    Stockpile group
6    Apron feeder
8    Percentage splitter
9    Priority feeder
21   Screen destination
22   Screen percentage
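For convenience, the table above can be restated as a plain dictionary; `EQUIPMENT_KEYS` and `describe` below are hypothetical helpers for inspecting the loaded data, not part of the package:

```python
# hypothetical helper: restates the key-to-equipment table above
EQUIPMENT_KEYS = {
    1: "Crusher", 2: "Screen", 3: "Conveyor", 4: "Bin",
    5: "Stockpile group", 6: "Apron feeder",
    8: "Percentage splitter", 9: "Priority feeder",
    21: "Screen destination", 22: "Screen percentage",
}

def describe(data_dict):
    """Print which equipment tables were loaded and their row counts."""
    for key, df in data_dict.items():
        name = EQUIPMENT_KEYS.get(key, "Unknown")
        print(f"{key}: {name} ({len(df)} rows)")
```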

3.Create a directed graph

We can also create a directed graph of all equipment based on the data dictionary we just generated. In the directed graph, each piece of equipment is defined as a node, and each connection between them is defined as a directed edge.

In the code below, we pass the data_dict we just generated and define how many ore types are used in the input workbook. The graph can be saved to and read from a gpickle file.

ep_map = ia.generate_graph(data_dict, ore_type_num=9)
# save the graph as a gpickle file and read it back
nx.write_gpickle(ep_map, "KD2_109C.gpickle")
ep_map = nx.read_gpickle("KD2_109C.gpickle")
# a gpickle file is a plain pickle, so pandas can read it as well
ep_map = pd.read_pickle('KD2_109C.gpickle')

As the graph is stored like a dictionary, once it is generated you can easily access the attributes of any equipment or the connections between them. For example, to get the process rate of ‘SC2311’, just use

ep_map.nodes['SC2311']['rate']

or if you want to know all the attributes of ‘SC2311’, just use

ep_map.nodes['SC2311']

To get all possible out-edges from ‘SC2311’ (the result also shows the percentage going along each edge), use

ep_map['SC2311']

Or, to access a specific edge if it exists.

ep_map['SC2311']['CV2244']
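These lookups are plain NetworkX dictionary access. A toy graph (the names and numbers below are illustrative only, not taken from the workbook) shows the same pattern without needing the workbook:

```python
import networkx as nx

# toy equipment map; attributes are illustrative only
g = nx.DiGraph()
g.add_node("SC2311", rate=1500)           # process rate
g.add_edge("SC2311", "CV2244", pct=60)    # 60% to this conveyor
g.add_edge("SC2311", "CV2245", pct=40)    # 40% to the other

print(g.nodes["SC2311"]["rate"])   # a single attribute -> 1500
print(g.nodes["SC2311"])           # all attributes of the node
print(dict(g["SC2311"]))           # all out-edges with their pct
print(g["SC2311"]["CV2244"])       # one specific edge -> {'pct': 60}
```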

4.Mass splits calculation

Calculation

As one of the most useful features :smirk:, the graph can calculate the throughput to each piece of equipment, given a source node and the input tonnes.

# calculate split pct from CS75LG to BX2301
pct_split_CS75LG = ia.calculate_split_from_node(ep_map, 'CS75LG', end_node=['BX2301'], input_pct=100000, ore_type_id=3, max_sc_visited=1)

The example above calculates the mass split from ‘CS75LG’ to ‘BX2301’ with an input of 100000. It returns a dictionary with equipment names as keys and split throughputs as values.

end_node is a list of nodes where the calculation stops. If end_node is not defined, the calculation continues until no successor nodes are found.

ore_type_id also needs to be defined, because different ore types have different split percentages at the screens.

‘max_sc_visited’ sets how many times an entity can go through the same screen. If it is set to 0, the calculation skips all looping edges, exiting each screen at the first visit. If it is set to 1, looping edges are only skipped at the second visit.

Numbers greater than 1 are not recommended, as they significantly increase the calculation time.
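At its core the calculation is a weighted traversal: each node's throughput is its input multiplied by the edge percentages along every path from the source. A minimal sketch of that idea on a loop-free toy graph (this is not the package's actual implementation, and it ignores the screen-loop handling that max_sc_visited controls):

```python
import networkx as nx

def split_from(graph, source, tonnes, end_nodes=()):
    """Propagate `tonnes` from `source`, multiplying by edge `pct` weights.

    Sketch only: assumes the graph is acyclic, unlike the real equipment map.
    """
    totals = {source: tonnes}
    for node in nx.topological_sort(graph):
        if node not in totals or node in end_nodes:
            continue  # unreachable node, or a stop node
        for succ, attrs in graph[node].items():
            totals[succ] = totals.get(succ, 0) + totals[node] * attrs["pct"] / 100
    return totals

# toy flow: crusher feeds a screen that splits 60/40 onto two conveyors
g = nx.DiGraph()
g.add_edge("CS75LG", "SC2311", pct=100)
g.add_edge("SC2311", "CV2244", pct=60)
g.add_edge("SC2311", "CV2245", pct=40)
print(split_from(g, "CS75LG", 100000))
```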

Recalculation

The returned mass-split dictionary can also be used in the next calculation, so users can stack many calculations onto the same dictionary. For example, the code below continues the previous calculation: ‘BX2301’ is used as the source node, and the previous mass-split dictionary is passed in.

# get the start node input in the next calculation
previous_input = pct_split_CS75LG['BX2301']
# make the input of the last node input 0 to prevent double counting
pct_split_CS75LG['BX2301'] = 0

# keep calculating the splits by passing the old dictionary.
pct_split_total = ia.calculate_split_from_node(ep_map, 'BX2301', end_node=[], pct_split=pct_split_CS75LG, input_pct=previous_input, ore_type_id=3, max_sc_visited=1)

This method can be used to reduce the time and memory cost of the calculation when there are too many screens and splits. The results should be checked in basic tests to make sure the model behaves as defined in the input workbook.

5.Simplification

To simplify the flow for visualization, a function is provided that merges a set of equipment into a single node.

# generate simple graph and pct_split by merging equipment
simple_ep_map, merged_pct_split = ia.simplified_graph(ep_map, pct_split=None, print_merged_process=False)

Equipment of the same type, with similar names (sharing the first 4 characters) and the same source, will be merged into a single node.

For example, BN1311, BN1321… to BN1381, which are fed by the same tripper, will be merged into a new node ‘BN1321-81’. The rates and capacities are inherited from the merged nodes and multiplied by their count. This process repeats until there are no more nodes to merge.

The simplified graph should be used for validation only; it does not support the mass-split calculation. However, an existing mass-split result can also be merged to match, by passing it to the function.
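The grouping criterion (same 4-character prefix, same predecessor) can be illustrated with NetworkX's contracted_nodes; this is only a sketch of the idea, not the package's simplified_graph, and the merged-node naming and attribute handling are omitted:

```python
import networkx as nx

g = nx.DiGraph()
for bn in ("BN1311", "BN1321", "BN1331"):
    g.add_edge("TR1301", bn)   # bins fed by the same tripper

# group nodes by (4-char name prefix, set of predecessors)
groups = {}
for n in list(g.nodes):
    key = (n[:4], tuple(sorted(g.predecessors(n))))
    groups.setdefault(key, []).append(n)

# contract each group of similar nodes into its first member
for nodes in groups.values():
    if len(nodes) > 1:
        keep, *rest = nodes
        for other in rest:
            g = nx.contracted_nodes(g, keep, other, self_loops=False)

print(list(g.nodes))  # the three bins collapse into one node
```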

6.Visualization

The graph can be visualized as a Matplotlib figure.

The code below first positions all the nodes sourced from ‘CS75LG’ using DFS.

Next, it passes the graph ‘ep_map’, the position dictionary ‘pos_dfs’ and the mass-split results ‘pct_split_total’ to the drawing function.

The result is the flow chart below. In the flow chart, green marks the source and red marks the destinations. Different equipment types have different node shapes.

Admittedly the flow chart is messy and needs improvement :sweat_smile:, but it should still give some idea of how the flows go.

# get position dict from a specific start node
pos_dfs = ia.get_pos(ep_map, end_node=['CV2241','CV2153'], h_dis=2, v_dis=2, start_node='CS75LG')

# visualize the connection
fig = ia.draw_map(ep_map, pos_dfs, pct_split=pct_split_total)

[output example figure]



