A flotation flowsheet should be as simple and flexible as possible, because changes in the ore often necessitate changes in treatment: the more flexible the flotation machine, the easier and quicker the changes can be made, not only in the arrangement of cells, but also in the ability to treat coarse as well as fine material. The flexible features of a Sub-A which should be considered in flowsheet design are:
Due to the integrated nature of process units within a refinery, a change in key operating variables has an impact on overall refinery operation and on product blending as well. Refinery-wide flowsheets give a true representation of this effect because all process units are linked together. Non-linear kinetic or equilibrium models in the flowsheet represent the conversion units, while fractionation models capture the efficiency of separation between different cuts or components. KBC (A Yokogawa Company) has developed and used refinery-wide flowsheets for more than 30 years. Developed in the 1980s, Petrofine was a FORTRAN-based tool capable of refinery-wide flowsheeting. In 2004, KBC launched Petro-SIM with additional features to simulate refinery process units. These standalone models are combined to create a complex-wide flowsheet which includes all process units within the refinery. Petro-SIM is KBC's process simulator used for rigorous modelling of the entire refinery and petrochemical complex, from crude to finished products.
Since each unit, including the conversion units, is modelled meticulously, the overall simulation suitably reflects the non-linearity of petroleum refining, which enables sensitivity analysis over a wide range of operating variables and feedstocks. The conversion units are based on comprehensive kinetic models that predict the unit yields and product qualities. The kinetic models are calibrated specifically to match the available plant data for a particular unit. This allows the simulation to be representative of the specific unit's operation, independent of the licensor. Product separation is simulated using fractionation technology that represents current operation and heat balances. Heat- and material-balanced distillation models, which use a section-by-section approach rather than simulating each tray, are calibrated to plant data. KBC and the company's clients around the world have developed numerous refinery-wide flowsheets. Petro-SIM based flowsheets are being used for the identification and evaluation of margin improvement opportunities, including optimisation of stream routings, blending strategies, molecular management, throughput maximisation, feedstock selection, and improvements in unit operating conditions. Refinery-wide flowsheets have also been used for configuration studies for grassroots and revamp configurations.

Flowsheet development

Standalone models for the process units are the main building blocks of the refinery-wide flowsheet. Detailed kinetic and equilibrium based Petro-SIM models are calibrated using test run data. Unit configuration, operating parameters from the historian, and laboratory data are used to calibrate the standalone models for the process units. The data are reconciled to close the mass, sulphur, nitrogen, carbon, and hydrogen balances. A calibrated process model mimics the performance of the process unit. The models are valid over a wide range of operation as they are based on first principles and are non-linear in nature.
To understand overall refinery operation, a base month is selected. Base month operation provides an insight into the marginal mechanisms in the refinery. The data used for the standalone models are based on test run operating conditions, and these test runs may have been conducted in a different time period. It is therefore essential to prepare a consistent basis for the operating conditions of all process units. The following guidelines are used to select the base month for the flowsheet: the crude blend for the month should represent the typical crude blend used by the refinery; crude throughput should be close to the refinery's typical crude throughput; most of the process units should be operating at typical capacities and normal operating conditions; most of the process units should operate continuously in stable conditions; and changes in the inventory of intermediate streams should not be significant. One of the major challenges concerns inventory. Inventory changes are not simulated in the flowsheet as it represents steady state operation of the refinery. Inventory changes in feed and product are used to estimate the net feed processed and the net products produced in the refinery. Inventory changes for intermediate streams affect the throughputs of the process units, and hence it is essential that the base month operation has minimum inventory changes for the intermediates. Mass balances for different process units may be inconsistent for refineries which do not use data reconciliation tools; for instance, the throughput of a delayed coking unit measured by the feed meter may not be the same as the vacuum residue production measured in the vacuum distillation unit. Validating consistency for a product which is routed to more than one process unit and to blending is more challenging. Due to these issues, data for the base month also require reconciliation. To build a refinery-wide flowsheet, all standalone models are combined.
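The balance-closing step described above can be illustrated with a minimal weighted least-squares reconciliation. The sketch below is a toy example, not the method used in Petro-SIM: two meters measure what should be the same vacuum-residue stream, and the measurements are adjusted, inversely weighted by meter variance, so the balance closes exactly. All flow values and standard deviations are assumed.

```python
import numpy as np

# Hypothetical measured flows (t/h): vacuum residue leaving the VDU and
# delayed coker feed. A single balance says they must be equal, but the
# meters disagree.
m = np.array([120.0, 114.0])          # measurements
sigma = np.array([2.0, 3.0])          # assumed meter standard deviations
A = np.array([[1.0, -1.0]])           # balance constraint: A @ x = 0

# Weighted least-squares reconciliation, closed-form projection:
#   x = m - V A^T (A V A^T)^-1 A m,   V = diag(sigma^2)
# The less reliable meter (larger sigma) absorbs more of the correction.
V = np.diag(sigma**2)
x = m - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ m)
```

After the adjustment both flows agree, and the meter with the larger variance has moved further from its raw reading, which is the behaviour a reconciliation tool relies on.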
Crude blend and throughputs for the crude distillation units (CDU) are updated so that the flowsheet feed represents base month operation. Routing strategies used by the refinery are replicated in the flowsheet. During the flowsheet development phase, refinery strategies are used for routings rather than optimum routings. Sensitivity analysis to validate optimum strategies is performed after completing the base case, which represents as-is refinery operation. Understanding the refinery stream routing strategy can be a difficult task if the refinery has multiple trains or two to three process units for one purpose, for instance more than one diesel hydrotreating unit. In this case, routings are fixed based on feedback from planning engineers and base month data. Petro-SIM's blending unit operation allows the flexibility of optimising refinery blends based on prices, product demand, product specifications, and other constraints. Marginal mechanisms used by the refinery must be reflected in the flowsheet product blenders. The flowsheet must hit the key specifications for each blend, such as octane for gasoline, flash point for diesel, viscosity for fuel oil, and so on. The marginal streams used by the refinery should be reflected in the flowsheet as well. The refiner also needs to identify major changes which are planned in the near future, including the revamp of a process unit or a significant shift in the crude basket. The refinery-wide flowsheet may be updated with these changes, after which it represents the base case operation of the refinery. The work process to develop the refinery-wide flowsheet is shown in Figure 1.
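Blend optimisation of the kind a blending unit operation performs can be sketched as a small linear programme. The example below is illustrative only: the component names, octane numbers, costs, and limits are invented, and linear octane blending is assumed, which real blending models refine.

```python
from scipy.optimize import linprog

# Illustrative gasoline blend: three components (all numbers assumed).
octane = [98.0, 92.0, 93.0]   # reformate, FCC naphtha, butane (linear blending assumed)
cost   = [80.0, 60.0, 40.0]   # $/bbl
demand = 100.0                # bbl of finished gasoline
min_octane = 95.0

# Minimise blend cost subject to:
#   total volume = demand,
#   blend octane >= spec  (rewritten as sum((spec - octane_i) * x_i) <= 0),
#   butane capped at 10 bbl (a stand-in for an RVP limit).
A_ub = [[min_octane - o for o in octane],
        [0.0, 0.0, 1.0]]
b_ub = [0.0, 10.0]
A_eq = [[1.0, 1.0, 1.0]]
b_eq = [demand]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, None)] * 3)
```

At the optimum the octane constraint is active, which mirrors the "marginal mechanism" idea in the text: the blend sits exactly on its key specification.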
In flowsheet simulations, a large variety of methods on different levels, from the phase level up to the process level, have to interact with each other, as illustrated in Fig. 1. On the phase level, the state of a single phase is described. Depending on the type of the phase, this can be done using an equation of state or an activity coefficient model. On the unit level, thermodynamic equilibria of single process units are calculated using the information of the phase level. Typical approaches are the Gibbs energy minimization (Lwin, 2000) in the case of chemical equilibria, or the calculation of the necessary equilibrium condition, namely the equality of the chemical potentials, in the case of phase equilibria (Walas, 1985). In a previous work we introduced a Dynamic Method which is based on the solution of a set of ordinary differential equations (ODEs). This method is able to solve chemical and phase equilibria problems as well as simultaneous chemical and phase equilibria (Zinser et al., 2015, 2016b).
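The idea behind such a dynamic method can be illustrated on a toy reaction: instead of solving the algebraic equilibrium conditions directly, the mole balances are integrated in pseudo-time until they relax to the steady state. This sketch is not the authors' formulation; the single reaction A ⇌ B and its rate constants are assumed.

```python
from scipy.integrate import solve_ivp

# Toy relaxation to chemical equilibrium for A <=> B with K = kf/kr = 2.
kf, kr = 2.0, 1.0

def rhs(t, y):
    a, b = y
    r = kf * a - kr * b        # net forward rate
    return [-r, r]

# Integrate long enough that the transient has fully decayed.
sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
a_eq, b_eq = sol.y[:, -1]
# At steady state kf * a = kr * b, i.e. b/a = K = 2, so a = 1/3, b = 2/3.
```

The attraction of the approach is that the same ODE integration machinery handles chemical, phase, and simultaneous equilibria without a separate nonlinear solve.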
In order to compute a process model, the underlying unit models have to be connected according to the flowsheet connectivity, and the mass balances in the overall process model have to be solved. This can be done iteratively by the class of tearing methods. This cost-intensive iteration can be eliminated by a simultaneous solution approach, such as the Simultaneous Dynamic Method (Zinser et al., 2016a). One step further in this hierarchy is the optimization level. The purpose of an optimization task is to identify optimal process parameters in order to minimize operating costs. A brief overview of the Dynamic Method is given in section 2, and its extension to the Simultaneous Dynamic Method is outlined in section 3. The generalization of these approaches towards process optimization problems is introduced in section 4.
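A tearing iteration can be sketched on an assumed toy flowsheet with one recycle: the recycle stream is "torn", guessed, and then updated by successive substitution until the guess reproduces itself. The conversion and split fractions below are invented for illustration.

```python
# Toy flowsheet: mixer -> reactor (80 % single-pass conversion) ->
# separator that sends 90 % of unreacted feed back as recycle.
fresh = 100.0                 # kmol/h fresh feed (assumed)
conv, split = 0.80, 0.90

recycle = 0.0                 # initial guess for the torn stream
for _ in range(200):
    reactor_in = fresh + recycle
    unreacted = reactor_in * (1.0 - conv)
    new_recycle = split * unreacted
    if abs(new_recycle - recycle) < 1e-10:
        break                 # tear stream has converged
    recycle = new_recycle     # successive substitution update
```

The fixed point satisfies recycle = 0.9 · 0.2 · (fresh + recycle), i.e. about 21.95 kmol/h here; the repeated flowsheet passes needed to reach it are exactly the "cost-intensive iteration" that a simultaneous approach avoids.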
The reliability of flowsheet simulation results depends heavily on the knowledge of model parameters, among which are, for example, kinetic parameters of chemical reactions or substance property data describing thermodynamic behavior. These parameters are usually estimated by nonlinear regression with respect to data from a laboratory setup, mini-plant, or operating data from a production process. Applying model-based design of experiments (DoE), operating conditions can be identified which maximize the information content of the resulting data for the regression problem. In this contribution, we present gradient plots in order to make DoE plans more transparent to the engineer. In a second step, we report on the implementation of an iterative DoE scheme in a flowsheet simulator. Thus, the interplay between estimating model parameters, planning experiments, adding the resulting experimental data, and re-adjusting the parameters is supported.
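The core of model-based DoE, choosing operating conditions that maximize the information content for the regression, can be sketched for a one-parameter toy model: the Fisher information is the squared parameter sensitivity, and the best sample time is the one that maximizes it. The first-order kinetics and the current parameter guess are assumed, not taken from the contribution.

```python
import numpy as np

# Toy model y(t) = exp(-k * t) with one unknown parameter k.
# The information a measurement at time t carries about k is
#   J(t) = (dy/dk)^2 = (t * exp(-k * t))^2,
# so a D-optimal single-point design maximizes J over t.
k_guess = 0.5                                  # current estimate of k (assumed)
t_grid = np.linspace(0.01, 10.0, 1000)
sens = -t_grid * np.exp(-k_guess * t_grid)     # sensitivity dy/dk
info = sens**2                                 # Fisher information (unit variance)
t_opt = t_grid[np.argmax(info)]
# Analytically the optimum is t = 1/k: too early and nothing has reacted,
# too late and the signal has decayed away.
```

Plotting `info` over `t_grid` is precisely the kind of gradient plot that makes a DoE plan transparent: the engineer sees why a mid-range sample time is worth more than an early or late one.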
The underlying models for flowsheet simulation consist of a large number of nonlinear equations and unknowns with only a few degrees of freedom. In commercial flowsheet simulators, the user is typically required to specify a precise value for each degree of freedom. In general, however, this approach does not reflect the user's demands. Product purities, for example, call for inequality constraints rather than equality constraints. Other process quantities, such as recycle compositions or operating conditions, also have to be specified although they may actually be optimization parameters. The fact that certain values have to be fixed impedes a full exploration of process limitations and restricts the user to a small subset of the solution space. Freeing the optimization variables and changing equality to inequality constraints to extend the feasible solution space is typically done only in a subsequent optimization step, which requires repeated calls of the simulation with different parameters.
This work aims at a more direct way toward optimization in flowsheet simulations. The intriguing feature of the approach lies in the embedding of the flowsheet problem in an optimization problem with a small number of optimization variables and constraints. Hence, large-scale optimization solvers are not needed. The embedding is possible due to a suitable decomposition of the entire system of equations into the unit operations, as well as tailored decomposition strategies for the different unit operations.
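The contrast with fixed specifications can be sketched in a few lines: rather than fixing a purity as an equality, it enters as an inequality constraint while an energy proxy is minimized. The purity and energy models below are invented toy functions, not a distillation model.

```python
from scipy.optimize import minimize

# Toy embedding: purity(r) = r / (r + 1) mimics diminishing returns with
# "reflux" r, and the energy proxy grows linearly with r. Instead of
# specifying r (or the purity) exactly, we demand purity >= spec and let
# the optimizer find the cheapest r.
spec = 0.95

res = minimize(lambda r: r[0],                       # minimize energy proxy
               x0=[5.0],                             # infeasible start is fine
               method="SLSQP",
               bounds=[(0.0, None)],
               constraints=[{"type": "ineq",
                             "fun": lambda r: r[0] / (r[0] + 1.0) - spec}])
r_opt = res.x[0]
# The inequality is active at the optimum: purity sits exactly at spec
# (r = 19 for these toy functions), with no energy spent beyond it.
```

The point of the sketch is the formulation, not the model: the user states the demand (purity ≥ 0.95) and the solver, not a fixed specification, determines where the process ends up.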
The present work focuses on decomposition strategies for distillation processes which enable numerically stable stage-to-stage calculations using the equilibrium stage model. Stage-to-stage calculations of column profiles are frequently used in literature, especially in combination with the simplifying assumption of constant molal overflow (CMO). One of the earliest papers in this field is the work of Lewis and Matheson (1932). For newer developments, see, e.g. Levy et al. (1985), Van Dongen and Doherty (1985), Levy and Doherty (1986), Julka and Doherty (1990), Zhang and Linninger (2004), Lucia et al. (2006), Lucia et al. (2008), and Petlyuk et al. (2015). In the present work, stage-to-stage calculations are based on rigorous mass, equilibrium, summation and heat (MESH) equations on each stage and among stages. The transition from one stage to the next is provided by solving a fixed-point problem. Combining stage-to-stage calculations with the ideas of the shooting method, which is typically used in order to solve boundary value problems for ordinary differential equations, it is possible to embed distillation processes into an optimization problem. Within this new framework, the user only needs to specify the actual demands on the process. These specifications can be incorporated by adding equality or inequality constraints to the optimization problem or suitable objective functions which attempt to fulfill certain demands instead of being restricted to a fixed number of specified equalities when using commercial systems.
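A bare-bones stage-to-stage calculation under the CMO simplification mentioned above might look as follows; the rigorous MESH-based transition of the present work replaces the constant relative volatility and constant L/V assumed here, and all numbers are illustrative.

```python
# Stage-to-stage stepping for a binary mixture under constant molal
# overflow (CMO) with constant relative volatility alpha (assumed values).
# Starting from the reboiler, alternate the equilibrium relation and the
# stripping-section operating line.
alpha = 2.5
xB = 0.05                  # bottoms mole fraction of the light key
L_over_V = 1.3             # stripping-section slope L/V (> 1)

def equilibrium_y(x):      # vapor in equilibrium with liquid x
    return alpha * x / (1.0 + (alpha - 1.0) * x)

x, stages = xB, 0
while x < 0.5 and stages < 50:                    # step up to an assumed feed composition
    y = equilibrium_y(x)                          # equilibrium on the stage
    x = (y + (L_over_V - 1.0) * xB) / L_over_V    # operating line: liquid from stage above
    stages += 1
```

For these numbers six stages carry the liquid composition from 0.05 past 0.5; embedding such a march in a shooting method is what connects the profile to specifications at both column ends.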
Before the experiments, flowsheet simulations are carried out with a flowsheet simulator (ChemCAD) to narrow the solution space, reduce the required number of experiments, and find proper parameters for the unit operations. The optimal mass flow rates, reflux ratio, and heating and cooling requirements can also be determined. The UNIQUAC method is applied as the equilibrium model for the calculation of the highly non-ideal VLE. Figure 3 shows the ChemCAD model of the separation of the selected mixtures.
The EHAD separation is carried out in the first column. The second column separates EtAc from water, and the third column removes the alcohol compound. Among the products is ethanol of 95 m/m% purity, i.e. below the azeotropic composition. In our investigation the current industrial practice is considered, where the dewatering of the ethanol can be completed with a molecular sieve (Toth et al., 2016a). The pressure must be raised to 3 bar in the second column to meet the corresponding limit value, while the other operation units achieve the separation at atmospheric pressure.
The EHAD column is examined experimentally in a laboratory apparatus. The main parameters of the experimental column are the following: structured packing, internal diameter of 40 mm. The column has 10 theoretical plates according to a measurement carried out with a methanol-water mixture. The solvent feed enters at the middle of the column. The entrainer (water) is fed at the top of the column, as the EHAD philosophy requires. The column heating is controlled with a 300 W heating basket, and the phase separator operates at atmospheric conditions. The flow leaving the condenser goes to a phase split. The upper, organic-rich phase is taken away. The lower, water-rich phase goes back into the column as reflux (Toth et al., 2016a). The organic content of the feed (F), distillate (D), and bottom product (W) is measured with a Shimadzu GC2010Plus gas chromatograph with an AOC-20 autosampler and a CP-SIL-5CB column connected to a flame ionization detector; an EGB HS 600 headspace apparatus is used for sample preparation. The water content is measured with a Hanna HI 904 coulometric Karl Fischer titrator.
Figure 1 shows the simulation flowsheet of the lactic acid production process from sucrose, as well as the purification of the acid using esterification and hydrolysis reactions in reactive distillation systems. The fermentation reaction was conducted at 307.15 K and 1 atm with mass fractions of sucrose and water of 0.4 and 0.6, respectively. The hydrolysis and esterification reactions were carried out at 353.15 K and 393.15 K, respectively.
The full process of PLA polymerization is depicted in Figure 2. To find the best operating conditions for each reactor, the sensitivity analysis tool of the simulator was used. The aim of the first part of this process (oligomerization) is to obtain PLA oligomers of low molecular weight without catalyst. To achieve the highest mass flow of PLA, the reactor (OLIG-R) must be operated in the vapor-liquid phase at a temperature of 483.15 K and a pressure of 2 atm, with a condensed-phase volume of 30 m3 and a residence time of 3.3 h. The stream conditions from OLIG-R are indicated in Table 1.
To produce lactide it is necessary to use another CSTR reactor (LACTID-R), which receives the PLA oligomer stream from the previous reactor mixed with an appropriate stannous catalyst (LIQUID-2). This reactor must be operated in the liquid phase at a temperature of 493.15 K and a pressure of 0.0132 atm, with a condensed-phase volume of 40 m3 and a residence time of 0.42 h, to produce the highest mass flow of lactide without increasing the other components. Input and output properties of the LACTID-R streams are shown in the last two columns of Table 1. The last section of this polymerization process is the production of high molecular weight PLA from lactide. Before entering the reactor (PLA-R), the lactide stream (VAPOR-3) is purified by removing water and lactic acid to prevent contamination. The purification is carried out in two vacuum distillation columns. It is necessary to control the reagent amounts, because if the lactic acid and water content is very high, it is impossible to achieve a high molecular weight polymer. On the other hand, if the lactic acid concentration is very low, the reaction will be slow and the molecular weight of PLA very high. Figure 3 shows the decrease of the PLA molecular weight number (MWN) with the inlet LA/lactide ratio and the catalyst mass flow. This reaction used another stannous catalyst, and a polymer with a molecular weight of 25183.6 was obtained. By adjusting the reagent ratios and varying the operating conditions it is possible to control important PLA properties such as MWN. This would allow the synthesis of PLA polymers with characteristics suited to their application. The PLA reactor reaches the highest mass flow of PLA at a temperature of 473.15 K, a pressure of 0.0132 atm, a reactor volume of 1 m3, liquid phase, and a reaction time of 0.12 h.
For stage-2, flowsheet simulations of the two DME processes have been developed in Aspen Plus with the design specifications taken from Luyben (2014) and Machado et al. (2014), respectively. The specified production target is 52 mt/h (Chen et al., 2012), while the purity of the product for fuel-grade DME is set to be greater than 99.9 %. The operating conditions are optimized further through the simulator. The process hot-spots of each processing route are identified by the SustainPro tool, while the ECON and LCSoft tools have been used to calculate the economic parameters and the various environmental impact indicators.
Next, analyses of the process alternatives are performed. Two constraints are proposed to screen the feasible processing routes: (1) the use of CO2 as a raw material due to environmental concern; (2) the source of the H2 needed for the methanol dehydration reaction. Economic, sustainability, and LCA analyses are performed to identify the hot-spots of each processing route. The analysis results for DME processing route-A indicate that the furnace has a high carbon footprint (42.89 % of the total carbon footprint from the process) and a high utility cost (32.75 % of the total utility cost) because of the high energy demand for the reforming step.
For DME processing route-B, the compressor in the CO2 hydrogenation step consumes a lot of energy, equivalent to 53.65 % of the total carbon footprint and 41.74 % of the total utility cost. The sustainability results show that the waste streams from the DME purification units for routes A and B have the highest material value added (MVA). This translates into a loss of valuable material (methanol) from the process. Based on these analyses, hot-spots are identified and targets for improvement are set, which are given in Table 2 for route-A and in Table 3 for route-B.
The critical parts of DME production via route-A are the furnace, the compressor, and the waste stream. For route-B the methanol purification unit and the compressor are the critical parts. In order to investigate the overall performance of the sustainable process, three improvements are developed for routes A and B: heat integration, an optimized compressor, and the addition of a methanol purification unit. These are designed in detail and compared in terms of the performance criteria, shown as radar plots in Figure 2. Figure 2 shows that the heat integration case and the optimized compressor case can reduce the carbon footprint and utility costs significantly compared to the base case design. For the addition of a MeOH purification unit the economics are improved, but with a trade-off, as it requires more energy for purification, resulting in an increase of the carbon footprint.
Aspen HYSYS V.8.3 is used for flowsheet simulation. The feedstock, Table 1, is an IAs-enriched C5 fraction from FCC gasoline. For simplification, all the inert hydrocarbons (the C5 fraction without IAs) are represented by isopentane (iC5). The selected fluid package is UNIQUAC for the liquid phase and Peng-Robinson for the vapour phase. The non-ideal behaviour of iC5-IAs-MeOH-TAME mixtures is intensively exploited in the process conceptual design. The Eley-Rideal reaction kinetic model (Rihko and Krause, 1995) with Amberlyst 35 wet (Dow Chemical) cation-exchange resin catalyst is considered. The total MeOH feed to the reaction section (reactors R1 and R2) is 65 kmol/h. The ideal adiabatic plug flow reactor (PFR) model is selected for R1 and R2. For CS3, column T1 is aimed at separating pure TAME from the R2 effluent, fed on the 5th stage at its boiling point. Consequently, the bottoms product is high-purity TAME, while the distillate consists of unreacted MeOH and all hydrocarbons. The T1 specifications are the reflux ratio and the bottoms TAME molar fraction. T2 and T3 use pressure swing coupled with L-L equilibrium separation to obtain the hydrocarbon stream C5 and a pure MeOH stream. The T2 column specifications are the reflux ratio and the distillate MeOH fraction. The distillate product is subcooled in V1 to separate two liquid phases (the heavy one, rich in MeOH, is part of the T3 feed). T3 separates high-purity MeOH in the bottoms, while the distillate is the heterogeneous hydrocarbons-MeOH azeotrope. The T3 specifications are the C5 recovery in the distillate product and the bottoms temperature. The distillate is subcooled in V2, and the MeOH-rich fraction is combined with the similar product from V1 as T3 feed. The T3 bottom product is recycled to the reaction section. The V1 and V2 light phases are recycled to the T2 feed.
The validity of simulation and optimization results based on flowsheet simulation models depends significantly on the quality of these models and their model parameters. The estimation and validation of the parameters is based on experiments that, depending on the complexity of the process, can be cost- and time-intensive. Experiments in integrated mini-, pilot- or production plants are especially expensive. The goal of model-based design of experiments (DoE) is to determine a set of experiments that yields the highest information content, which here means the lowest uncertainty in the model parameters. Since integrated processes are usually modeled in flowsheet simulators, an integration of DoE methods into such simulators is obvious and beneficial. In this contribution, the experience gained from the integration of methods of model-based DoE into BASF's flowsheet simulator CHEMASIM for steady state processes is described. The methods are generic in the sense that in principle any other flowsheet simulator could have been used.
In order to apply the Dynamic Method to an overall flowsheet simulation, the evolution equations for each single unit u ∈ U are formulated and extended by source and sink terms for each stream that is connected to the unit:
where τ(u) refers to the residence time of unit u. The feed streams of multiphase units may be assigned to an arbitrary phase or may be distributed among the phases. This may have a small impact on the computational performance, but not on the steady state solution.
Additionally, one has to make sure that the fluxes due to chemical reactions and phase transitions are much faster than the fluxes between the units that result from the flowsheet connectivity. The reason for this is that the thermodynamic equilibrium assumes either an infinite reaction volume or an infinite residence time. Both would lead to a cancellation of the sink and source terms which were introduced in this section.
The simulation environment DIVA is a software tool for dynamic flowsheet simulation of chemical processes which has been developed at the University of Stuttgart (Mohl et al., 1997). The plant model is formulated as a linearly implicit system of differential algebraic equations of the type
In addition to methods for steady state and dynamic simulation, optimization, and parameter estimation, DIVA contains a package for numerical bifurcation analysis. The package comprises algorithms for the one-parameter continuation of steady states and periodic solutions as well as for the two-parameter continuation of saddle-node and Hopf bifurcations. The numerics have been tailored to systems of high dynamical order. For a detailed description of the numerical methods the interested reader is referred to (Kienle et al., 1995; Mangold et al., 2000). In the following, only a brief idea of the methods will be given. The central element of the bifurcation package is a continuation algorithm used to trace the solution curve of an under-determined system of algebraic equations
g(y) = 0 in an (m + 1)-dimensional space. A predictor-corrector algorithm with local parameterisation and step-size control is used. A simple application of the continuation algorithm is the computation of stable and unstable steady state solutions as a function of some distinguished model parameter p. In this case, g is the right-hand side vector f of the model equations (1), and y consists of the state vector x and the parameter p. An eigenvalue monitor is used to determine the stability of the computed steady states and to detect singular points where one or several eigenvalues cross the imaginary axis. The singularities most frequently encountered in physical systems are saddle-node bifurcations (coincidence of two solutions) and Hopf bifurcations (stability change of steady state solutions and generation of periodic solutions). From bifurcation theory, necessary conditions for the singular points can be derived. Together with the steady state equations of the model, they form an augmented equation system for the direct computation of the state vector x and the parameter p at a singular point. The augmented equation systems are generated automatically by DIVA. In the framework of the continuation algorithm, they are used to trace the curves of singular points in two parameters. The resulting curves form the boundary of regions of qualitatively different behaviour in the parameter space. A further application of the continuation algorithm is the continuation of stable and unstable periodic solutions in one parameter. For that purpose, the continuation algorithm is combined with a shooting method adapted to the special demands of high-order systems.
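The predictor-corrector idea can be sketched on an assumed one-equation example: a secant step predicts the next point on the branch, and Newton iterations at fixed parameter correct it back onto the solution curve. Local parameterisation and step-size control, which the DIVA algorithm adds, are omitted here for brevity.

```python
import numpy as np

# Toy steady-state branch: f(x, p) = p - x**2 = 0, i.e. x = sqrt(p) for p > 0
# (the branch ends in a saddle-node at p = 0).
def f(x, p):
    return p - x**2

def fx(x, p):              # derivative of f with respect to x
    return -2.0 * x

branch = [(1.0, 1.0)]                    # known starting solution (x, p)
branch.append((np.sqrt(0.95), 0.95))     # second point to seed the secant predictor
for p in np.arange(0.90, 0.09, -0.05):   # continue p downward along the branch
    (x1, p1), (x2, p2) = branch[-2], branch[-1]
    x = x2 + (x2 - x1) / (p2 - p1) * (p - p2)   # secant (tangent-like) predictor
    for _ in range(20):                          # Newton corrector at fixed p
        x -= f(x, p) / fx(x, p)
    branch.append((x, p))
```

Every stored point satisfies f(x, p) ≈ 0 to machine precision; near the fold at p = 0 this naive fixed-parameter corrector would fail, which is exactly why local parameterisation is used in practice.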
The beneficiation flowsheet presented in this study is particularly adapted to the concentration of tungsten ore in small tonnages. Tungsten minerals are generally in the friable class, and therefore concentrating processes are hindered by the excess amount of fines produced in the crushing and grinding steps. Special consideration must be given to stage reduction and concentration to avoid excessive losses in the fines. The Mineral Jig plays an extremely important role in this flowsheet due to its ability to handle an unclassified feed, with outstanding recovery in the coarse sizes and substantial recoveries in the fine sizes. A selective high grade product is obtained with a minimum amount of water.
The Crushing section begins with a picking belt feeder of sufficient length to allow convenient inspection and the picking out of high grade or waste prior to crushing. A Vibrating Grizzly is used to remove the fines from the Jaw Crusher feed and thereby increase crushing capacity. The Jaw Crusher discharge is elevated to a Vibrating Screen which efficiently controls and limits the maximum size going to the gravity concentrating circuit. The Vibrating Screen oversize is reduced in the Crushing Rolls. This unit was selected for this flowsheet because it provides efficient reduction with a corresponding production of only a comparatively small amount of fines. The discharge from the Crushing Rolls is returned in closed circuit with the Dillon Vibrating Screen.
The Selective Type Mineral Jig is the first machine in the gravity concentration circuit and handles the unclassified Screen undersize. As indicated in the accompanying table, the Mineral Jig shows highly efficient concentration in all sizes of a feed which is minus 10 mesh. Although the Mineral Jig can be adapted for coarser feeds, this particular jig was fitted with 3 mm opening lower screens.
The hutch product from the Mineral Jig passes over a cleaner jig producing a high grade tungsten concentrate and a middling product which is then reground. The fines in the middling product are removed in a Dewatering Classifier and the sand product passes to a Peripheral Discharge Rod Mill.
The Peripheral Discharge Rod Mill permits rapid discharge with minimum contact time and therefore this mill is ideal for limiting the amount of fines in the ball mill discharge. After the middling product is quickly ground to liberate additional minerals, it is then re-jigged to recover the additional high-grade tungsten.
In tungsten plants, it is common practice to treat gravity tailings by flotation. Shown below is a characteristic soap flotation froth in a Sub-A Flotation Machine recovering ferberite from gravity plant tails in a Colorado tungsten concentrator.
The rougher jig tailing and the classifier over-flow pass to a Hydraulic Classifier which prepares a sized feed for tabling and slime concentration. The Concentrating Tables produce a high-grade concentrate, a middling product which is returned to the Peripheral Discharge Ball Mill, and a final tailing. The very fine tungsten is concentrated by means of the Buckman Tilting Concentrator which is especially designed for slime concentrations.
A typical tungsten recovery plant in Canada utilizes a 16 x 24 Duplex Mineral Jig in a closed grinding circuit. It operates as a rougher unit on the rod mill discharge at 50% solids and produces a concentrate running about 50% WO3. The hutch concentrate is discharged at intervals to a 12 x 18 Duplex Mineral Jig which operates as a cleaner unit. The feed to the cleaner jig is 70% solids, and the product grades 72-73% WO3 and amounts to 65% of the gravity concentrate recovered. The cleaner jig tailings return by gravity to join the rod mill discharge and are returned to the rougher jig.
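Performance figures like these are commonly back-calculated with the standard two-product formula from assays alone. The grades below are assumed for illustration and are not the plant's reported values.

```python
# Two-product formula: from feed (f), concentrate (c) and tailing (t)
# assays, estimate the weight fraction of feed reporting to concentrate
# and the metal recovery, with no flow measurements needed.
f, c, t = 1.2, 50.0, 0.15        # % WO3 (illustrative assays)

mass_pull = (f - t) / (c - t)                      # weight fraction to concentrate
recovery = 100.0 * c * (f - t) / (f * (c - t))     # % of contained WO3 recovered
# For these assays roughly 2 % of the feed weight carries about 88 % of the WO3,
# the kind of high upgrading ratio a jig circuit is chosen for.
```

Because the formula rests entirely on assays, small errors in the tailing grade swing the computed recovery noticeably, which is why concentrate weights are also tracked in practice.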
The final jig concentrate is, on an average, made up of 77% scheelite, 8% ferberite, 4% hematite and 11% silicates and oxides. The specific gravity is 5.73. The mesh analysis of the final jig concentrate is as follows:
The ability to recover a high-grade concentrate from an initial coarse unclassified feed, with a minimum amount of slimes, is one of the outstanding features of this flowsheet. Special consideration is given to the friable nature of the tungsten by using Crushing Rolls and the quick-pass grinding offered by the Peripheral Discharge Ball Mill. Previous experience in the concentration of tungsten ores indicates that the fine sizes carry the tungsten losses. For this reason, the Buckman Tilting Concentrator, which is particularly adapted to overcoming this loss, is installed.
Although not indicated on this flowsheet, tungsten ores are often subjected to magnetic separation and test work will indicate the advisability of using this process on a particular ore. Even scheelite, though not itself amenable to magnetic concentration, is often treated magnetically to remove garnet.
It has been a universal observation that rich grades of ores and minerals are being depleted while the demands of civilisation keep increasing, and the deposits that remain are of poorer grade, posing a problem for processing engineers. At the same time, the costs of mining and milling are rising, so management must handle and process low-grade ores economically while also meeting environmental stipulations and controls, an additional cost of compliance.
It is observed from the mineral processing technologies available that neither gravity concentration nor flotation and allied techniques can recover the substantial mineral values lost in tailings and slimes, where these rejects carry values in the 0 to 20 micron size range. The latest technology developed in various countries, which culminated in a class of centrifugal gravity separators such as the multi-gravity separator, Knelson separator, Falcon concentrator and Kelsey jig, has proved able to recover mineral values efficiently in the 0 to 20 micron range. Processing engineers can now plan the recovery of values from old tailings, mill rejects, marginal, sub-marginal and lean ores and overburden which, until today, have been left aside for want of technology, and project and process them as a usable source of minerals.
Tungsten has been recovered satisfactorily from rejects, tailings and old dumps, and even very fine values in the range of 500-1,000 ppm present in granite rock, comparing well with results obtained by processing such ores elsewhere in the world. The mineral processing engineer now has to look into the recovery of values and associated minerals from the tailings, old rejects and ore dumps of earlier mining activity.
The Hydrogen Council initiative quotes global hydrogen production as 8 EJ/y (71,600 kNm³/h), with the majority used in refineries and in ammonia and methanol production. Growth is forecast in these markets, and there is a growing body of evidence that, to reduce carbon dioxide emissions to acceptable levels, deeper cuts across all sectors will be required. A low carbon route to hydrogen production will be important to achieve the aims of the Paris Agreement and limit the impact of climate change.
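The two quoted figures can be cross-checked with a short calculation. This sketch assumes the conversion is on the higher heating value of hydrogen, roughly 12.75 MJ/Nm³; the source does not state the basis, so that value is an assumption.

```python
# Sanity check: 8 EJ/y of hydrogen expressed as a volumetric flow.
# Assumes the higher heating value of H2, ~12.75 MJ/Nm3 (assumption;
# the basis is not stated in the source).

EJ_PER_YEAR = 8.0
HHV_H2_MJ_PER_NM3 = 12.75
HOURS_PER_YEAR = 8760.0

nm3_per_year = EJ_PER_YEAR * 1e12 / HHV_H2_MJ_PER_NM3  # EJ -> MJ -> Nm3
knm3_per_hour = nm3_per_year / HOURS_PER_YEAR / 1e3

print(f"{knm3_per_hour:,.0f} kNm3/h")  # ~71,600 kNm3/h
```

The result lands within rounding of the 71,600 kNm³/h figure quoted alongside 8 EJ/y, which supports the HHV basis.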
Critical to the process is the SMR, which generates the majority of the world's hydrogen [1], as shown in the flowsheet in Figure 1. The SMR is a large refractory-lined box containing hundreds of pressurised tubes in which the hydrocarbon is converted by the steam reforming reaction, CH4 + H2O ⇌ CO + 3H2.
To achieve long catalyst lives, the first stage of the process purifies the natural gas in a hydrodesulfurisation step: the organo-sulfur compounds are reacted over a catalyst with recycled hydrogen to form hydrogen sulfide, which is then absorbed in a zinc oxide bed through its conversion to zinc sulfide.
In an SMR, hydrocarbons present in the feed natural gas are reacted with steam at pressures of 1-4 MPa in the presence of a nickel catalyst in tubes, which are generally 12-14 m in length with a 125 mm inside diameter.
The reaction produces a gas mixture comprising hydrogen, carbon monoxide and carbon dioxide, which is often termed syngas. The steam reforming reaction is endothermic, so heat is added to the process by burning additional natural gas and waste gas streams to heat the catalyst-containing tubes. Typically, the reformer exit temperatures are between 700-930°C depending on the flowsheet and final product.
The syngas from the steam reformer is further processed over a third and, in some cases, a fourth catalyst stage, where the water gas shift reaction (CO + H2O ⇌ CO2 + H2) takes place to maximise hydrogen production. This reaction is exothermic, and cooling the gas allows heat recovery back into the process, generally through steam generation and boiler feed water pre-heating.
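The combined stoichiometry of reforming and shift sets an upper bound on hydrogen yield, which this sketch makes explicit. It is idealised stoichiometry only: real units are equilibrium-limited, so the shift stages approach but never reach complete CO conversion.

```python
# Idealised hydrogen yield per mole of methane, combining steam
# reforming (CH4 + H2O -> CO + 3H2) with the water gas shift
# (CO + H2O -> CO2 + H2). Real plants are equilibrium-limited,
# so CO conversion in the shift stages is below 100%.

def h2_yield(co_conversion):
    """Moles of H2 per mole of CH4 for a given fractional CO shift."""
    h2_from_reforming = 3.0               # 3 H2 per CH4 reformed
    h2_from_shift = 1.0 * co_conversion   # 1 H2 per CO shifted
    return h2_from_reforming + h2_from_shift

print(h2_yield(1.0))   # ideal limit: 4.0 mol H2 per mol CH4
print(h2_yield(0.95))  # at 95% CO conversion: 3.95
```

This is why high-temperature and low-temperature shift stages are staged in series: each increment of CO conversion adds directly to the hydrogen product.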
SMR flowsheets have been optimised over a number of years and have become the technology of choice to produce hydrogen, with production capacities ranging from MWs to GWs (1-330 kNm³/h). The designs are now deemed a mature technology that reliably provides the high purity hydrogen required by downstream users. However, there are very few hydrogen plants that capture the carbon dioxide, as there is no economic or legislative incentive to do so.
Therefore, if hydrogen is to play a role in reducing the impact of climate change [2], then it will need to be produced with concomitantly low carbon dioxide emissions, which, when using natural gas as a feedstock, implies coupling it with carbon capture and storage (CCS). The combustion process in an SMR produces carbon dioxide at low concentration and atmospheric pressure, and hence inevitably leads to high capital cost (CAPEX) solutions for CCS. These include post-combustion capture of the carbon dioxide or using process-produced hydrogen as the fuel in the reformer. Both of these introduce significant efficiency losses.
Accordingly, to produce low carbon hydrogen cost effectively, the process should ideally deliver both high purity hydrogen and high pressure, high purity carbon dioxide ready for CCS. Additionally, consumers would not accept this hydrogen at an unaffordable cost penalty.
The LCH flowsheet recovers heat at maximum exergy (ie the highest possible quality) which offers efficiency benefits by coupling a gas heated reformer (GHR) with an autothermal reformer (ATR). The main difference between the LCH and SMR flowsheets is that the energy to drive the reaction is provided by introducing oxygen to the ATR as opposed to burning natural gas in the SMR. At the scales envisaged, this oxygen would come from an air separation unit. ATRs are already used in the production of syngas and are part of most modern schemes for production of methanol and liquid fuels from Fischer-Tropsch processes. These plants are very large and demonstrate that the technology is capable of producing hydrogen at large scale and therefore the scale-up risk is minimised.
Purified natural gas is pre-heated and reformed in the GHR before entering the ATR reactor. In the first reaction step in the GHR, 30% of the total hydrocarbon is reformed by reaction with steam to form syngas. In the second stage, the ATR, oxygen is added and combusts some of the partially-reformed gas to raise the process gas temperature to around 1,500°C. The resultant gas then passes through a bed of steam reforming catalyst inside the same reactor for further reforming. Since the reaction is limited by equilibrium, operation at high temperature and steam flows minimises the methane content of the product gas which in turn minimises overall carbon dioxide emissions. The hot gas exiting the ATR passes back to the GHR providing the heat necessary to drive the reforming reaction in the GHR tube-side.
For the LCH flowsheet, all of the carbon dioxide is within the product stream and therefore is at high pressure and relatively high purity and can be easily removed using standard industry removal technologies. This has implications in terms of the overall CAPEX of the flowsheet because the size of the carbon dioxide removal system is significantly reduced.
Whilst steam reforming using an SMR requires a large energy input from combustion of natural gas, energy integration means the flowsheet is relatively efficient. In an SMR, the waste heat generated primarily as steam is exported, whilst for an SMR with CCS, the steam is used to provide heat for carbon dioxide recovery and energy for carbon dioxide compression. In the LCH flowsheet, the waste heat is instead recycled to supplement the necessary reaction heat through the GHR, meaning natural gas fuel is not needed, resulting in reduced atmospheric emissions as illustrated in Table 1.
A further advantage of using the LCH technology is that the compression energy required for the air separation unit and carbon dioxide compression need not be provided by steam raising. The LCH flowsheet can power compressors using electrical grid energy sourced from renewables. The LCH technology can therefore integrate renewable energy and provide a flowsheet with dramatically reduced greenhouse gas emissions, lower capital cost and lower natural gas consumption.
GHRs have run continuously on a commercial basis for over 100 cumulative years, proving the concept over the long term with best-in-class reliability. The technology has been scaled up, and in 2016 facilitated a 5,000 mtpd methanol project in the US. In 2016, JM's technological breakthrough was awarded the top prize at the IChemE Global Awards (Outstanding Achievement in Chemical and Process Engineering).
This is the second article in a series discussing the challenges and opportunities of the hydrogen economy, developed in partnership with IChemE's Clean Energy Special Interest Group. For more entries, visit the series hub.