Accurate pressure measurement is crucial to developing an electric oil pump

Driven by escalating global emissions targets, OEMs are increasingly turning to electrification to reduce fuel consumption and greenhouse gas emissions. A popular choice in this regard is the hybrid electric vehicle, often powered by a severely downsized engine.

The problem with these downsized engines is that power-sapping auxiliary systems severely impair drivability and performance. Fortunately, these parasitic losses can be significantly reduced by replacing traditionally mechanically driven components with electrically driven units. Because of this, electrically powered pumps are rapidly finding their way into series production, particularly as oil and water pumps.

Image 1: Example of an electric oil pump
Image Source: Rheinmetall Automotive

But while the benefits are obvious, electrifying the oil pump in particular is technically complex: engineers not only wish to circulate the oil at a particular flow rate and pressure, but also want to intelligently match these to the engine's requirements.

To optimize performance, friction and pumping losses must be minimized through careful control of the oil flow into the different branches of the oil circuit, while ensuring the correct pressure is available at all times.

Simulation relies on accurate testbed oil pressure and flow rate information

An electrically powered oil pump is made up of three subsystems – the pump, the motor and the electronic controller. The primary challenge of any new application development is therefore the efficient integration of these modules, reducing the overall size, weight and number of components while optimizing performance.

The main function of the oil pump is to deliver a specified oil flow at an optimal pressure. For this reason, its design, which is an iterative process, starts with the ‘pumping gears’. For most applications the pump is required to deliver pressures in excess of 1 to 2 bar, often going as high as 10 bar.

As in most engine developments, a combination of simulation and real world testing is used to speed up the design. 

The design loop begins with the preliminary assessment of the volumetric efficiency based on experimental results collected on similar pumps and applications. These include pump speed, oil temperature, pressure and flow rate. 
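As a minimal sketch of that preliminary assessment (figures and function names are illustrative assumptions, not data from the article), the volumetric efficiency can be estimated by comparing the measured flow rate with the theoretical displacement flow at the recorded pump speed:

```python
# Estimate volumetric efficiency of a gear pump from testbed measurements.
# All figures are illustrative placeholders, not data from the article.

def volumetric_efficiency(q_measured_lpm: float,
                          displacement_cc_per_rev: float,
                          speed_rpm: float) -> float:
    """Ratio of measured flow to theoretical (ideal) displacement flow."""
    q_theoretical_lpm = displacement_cc_per_rev * speed_rpm / 1000.0  # cc/min -> L/min
    return q_measured_lpm / q_theoretical_lpm

# Example: a hypothetical 6 cc/rev pump delivering 18.5 L/min at 3500 rpm
eta_v = volumetric_efficiency(q_measured_lpm=18.5,
                              displacement_cc_per_rev=6.0,
                              speed_rpm=3500)
print(f"Volumetric efficiency: {eta_v:.2%}")  # ~88%
```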

It’s important that the information used for the estimation is accurate; the data collection must therefore be carried out using highly dependable, precise measuring equipment that can deliver accurate readings under the extreme conditions encountered in and around the engine.

To ensure accuracy and repeatability it’s important that only the best quality sensors are used when measuring the pressure. Not only must these pressure sensors provide reliable readings across a wide range of pressures and temperatures, but they must also withstand vibration.

Over many years, STS has developed pressure sensors that meet the engine-development requirements of OEMs, first-tier suppliers and specialist engine designers.

Developing an electric oil pump that outperforms the mechanical unit 

Armed with the information gathered on the hydraulic requirements at various flow rates, delivery pressures and oil temperatures, a preliminary design of the gears is finalized. Using Matlab’s Simulink software, the information regarding the behavior of the physical system can be rationalized into a one-dimensional simulation model.

At this stage it’s important to note that, to generate the required flow at a specified pressure, a rotational speed should be selected that allows the best packaging of motor and pump without creating cavitation or noise issues: a typical speed range for continuous operation is between 1500 and 3500 rpm.
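To make that packaging trade-off concrete, the sketch below (all figures assumed for illustration) sizes the pump displacement needed to meet a target flow across the 1500 to 3500 rpm continuous operating range, given an assumed volumetric efficiency:

```python
# Size the pump displacement so the target flow is met at the lowest
# continuous speed; figures are illustrative assumptions only.

def required_displacement_cc(q_target_lpm: float,
                             speed_rpm: float,
                             eta_v: float) -> float:
    """Displacement (cc/rev) needed to deliver q_target at a given speed."""
    return q_target_lpm * 1000.0 / (speed_rpm * eta_v)

target_flow_lpm = 12.0          # required oil flow (assumed)
eta_v = 0.85                    # assumed volumetric efficiency
for rpm in (1500, 2500, 3500):  # continuous operating range from the text
    v_disp = required_displacement_cc(target_flow_lpm, rpm, eta_v)
    print(f"{rpm} rpm -> {v_disp:.1f} cc/rev")

# Sizing to the 1500 rpm case (~9.4 cc/rev) guarantees the flow at all
# higher speeds, at the cost of a physically larger pump.
```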

In the next step, several designs can be generated using LMS Imagine.Lab Amesim software, which optimizes the design parameters – for example the number of teeth and the eccentricity – while satisfying all pressure, flow and temperature boundary conditions.

Once the geometrical features of the calculated hydraulics have been implemented and the interim design finalized, the total torque required to drive the pump at critical working points can be calculated as follows:

Mtot = MH + MCL + Mμ 

Where:

  • MH is the hydraulic torque due to the generation of required pressure and flow
  • MCL is the Coulomb friction contribution generated where there are dry or lubricated contacts between sliding parts
  • Mμ is the viscous contribution due to the fluid movement inside clearances.
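As a minimal worked example of this torque budget (all figures hypothetical), the hydraulic term can be approximated from the delivery pressure and the pump displacement, with the Coulomb and viscous terms treated here as assumed constants:

```python
import math

# Rough torque budget for an electric oil pump: M_tot = M_H + M_CL + M_mu.
# The hydraulic term is approximated as delta_p * displacement / (2*pi);
# the friction terms are simply assumed constants for illustration.

def hydraulic_torque_nm(delta_p_bar: float, displacement_cc: float) -> float:
    delta_p_pa = delta_p_bar * 1e5
    displacement_m3 = displacement_cc * 1e-6
    return delta_p_pa * displacement_m3 / (2.0 * math.pi)

M_H = hydraulic_torque_nm(delta_p_bar=5.0, displacement_cc=6.0)  # ~0.48 Nm
M_CL = 0.10   # assumed Coulomb (dry/lubricated contact) friction torque, Nm
M_mu = 0.05   # assumed viscous drag torque in the clearances, Nm

M_tot = M_H + M_CL + M_mu
speed_rpm = 3000
power_w = M_tot * speed_rpm * 2.0 * math.pi / 60.0
print(f"M_tot = {M_tot:.2f} Nm, shaft power ~ {power_w:.0f} W at {speed_rpm} rpm")
```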

Once the design is completed, engineering prototypes are constructed for real world evaluation on an engine testbed. 

Once again oil pressure, flow rate and temperature are measured at various engine and pump speeds to validate the results obtained through simulation. If the results meet the specifications the development program is finalized and the project enters the industrialization phase. 

For optimal performance and durability, all measurements must obviously be recorded accurately, but the information generated by the pressure sensor arguably carries the greatest weight of all: insufficient pressure at any point can lead to catastrophic failure, while excessive pressure wastes energy and can cause problems with the oil seals.

Pressure unlocks Compressed Natural Gas’ potential

Thanks to its high energy density per unit mass, compressed natural gas (CNG) is well suited for use as an automotive fuel. CNG has an octane number of approximately 120 and a heat of combustion of 9,000 to 11,000 kcal/kg (38 to 47 MJ/kg).

In addition, the combustion of CNG produces significantly lower CO2 emissions than that of gasoline, for example. And because CNG is a particularly cost-effective fuel in many markets, manufacturers are showing a growing interest in developing vehicles capable of running on this alternative fuel source.

The primary challenge in optimizing an Internal Combustion Engine to run on CNG is regulating the injection pressure in the fuel rail.

Image 1: Example of a bi-fuel system for gasoline and CNG
Image Source: Bosch Mobility Solutions

CNG is stored at approximately 200 bar and is commonly injected at between two and nine bar, depending on the engine requirements – low pressure for fuel-efficient driving in the lower speed ranges, and higher pressures when greater power and torque are required.

The effectiveness of combustion within an engine’s cylinder is strongly influenced by the temperature and pressure of the CNG: an increase in pressure at constant volume results in a higher mass density of the gas, increasing the energy content delivered per unit volume.
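As a first-order illustration of that relationship (treating CNG as pure methane and ignoring real-gas effects; all figures assumed), the ideal gas law shows how mass density, and with it the energy content per unit volume, rises with injection pressure:

```python
# First-order (ideal gas) estimate of CNG density vs injection pressure.
# CNG is treated as pure methane; real-gas compressibility is ignored.

R_SPECIFIC_CH4 = 518.3      # J/(kg*K), specific gas constant of methane
LHV_CH4_MJ_PER_KG = 50.0    # approximate lower heating value of methane

def density_kg_m3(pressure_bar: float, temp_c: float) -> float:
    return pressure_bar * 1e5 / (R_SPECIFIC_CH4 * (temp_c + 273.15))

for p in (2, 5, 9):  # injection pressure range mentioned in the text
    rho = density_kg_m3(p, 22.0)
    energy_mj_per_m3 = rho * LHV_CH4_MJ_PER_KG
    print(f"{p} bar: {rho:.2f} kg/m3, ~{energy_mj_per_m3:.0f} MJ/m3")
```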

However, although the initial temperature and injection pressure can be varied, if these are not accurately calibrated during development, compressed natural gas vehicles can suffer from power loss and poor drivability.

Injecting CNG under pressure

Typically, CNG is fed from a high-pressure tank via a pressure regulator to the fuel rail. For efficient fuel combustion, the amount of natural gas injected must always be matched to the mass of air required by the engine. To achieve this the electronic engine management typically employs an air-flow meter to determine the exact amount of air required and subsequently the quantity of CNG to be injected.
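A minimal sketch of that matching step, assuming CNG behaves like methane with a stoichiometric air-fuel ratio of roughly 17.2:1 by mass (illustrative only, not the actual engine-management algorithm):

```python
# Match the injected CNG mass to the measured air mass flow.
# Assumes a stoichiometric air-fuel ratio of ~17.2:1 (methane) and a
# target lambda; not the actual engine-management logic.

AFR_STOICH_CNG = 17.2  # kg of air per kg of fuel, approximate for methane

def required_fuel_flow_g_s(air_flow_g_s: float, target_lambda: float = 1.0) -> float:
    """Fuel mass flow needed for the commanded lambda."""
    return air_flow_g_s / (AFR_STOICH_CNG * target_lambda)

air_flow = 25.0  # g/s, hypothetical reading from the air-flow meter
fuel_flow = required_fuel_flow_g_s(air_flow, target_lambda=1.0)
print(f"Required CNG flow: {fuel_flow:.2f} g/s")  # ~1.45 g/s
```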

With central point injection (CPI), the CNG is fed from a natural-gas distributor (NGD) into the intake manifold. A medium-pressure sensor measures the pressure and temperature in the NGD, allowing the natural-gas injectors to deliver the precise amount of fuel required.

Alternatively, the injection can also be implemented without the NGD, by aligning each injector with a corresponding cylinder. With this multi-point injection (MPI), the gas is injected under pressure at each cylinder’s intake manifold ‘runner,’ upstream of the intake valve.

Because changes in pressure have a significant influence on the engine’s performance when running on CNG fuel, engine torque and exhaust emissions (CO, CO2, NOx and hydrocarbons) all have to be recorded during engine testing.

Optimizing rail pressure for all driving conditions

To optimize the CNG system it’s important that during the design and testing phases the pressure within the rail is accurately measured at various throttle openings and cross referenced to engine torque and the corresponding exhaust gas emissions. Consequently high quality pressure sensors are demanded by most development engineers.

It’s important that these sensors deliver accurate readings across a wide range of pressures, while retaining their integrity at elevated temperatures.

Although an increase in CNG pressure reduces CO2, HC and NOx emissions, it increases the CO in the exhaust gas, making it vital to accurately record the effects of modulating the CNG injection pressure.

During testing, a pressure regulator is used to control the injection pressure, which is measured by an accurately calibrated pressure sensor located in the rail, while an analog flow meter, typically with a capacity of 2.5 m³/h, is used to measure and control the inlet air flow rate. A chassis dynamometer is used to record engine torque.

For the duration of the test, gas temperature and flow rate are kept constant at 22°C and 0.1 SCFH, respectively. A high power blower is used to maintain engine temperature during the test, and emission test equipment is attached to the exhaust outlet to record CO, CO2, hydrocarbons and NOx content in the exhaust gases.

The process is quite complex and requires rail pressure, torque and emissions to be measured at hundreds of throttle opening points in order to create an effective map of the engine’s requirements for the engine ECU.
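As a simplified illustration of what such a map looks like in software (breakpoints and values are invented, not real calibration data), the measured operating points can be arranged into a two-dimensional lookup table that the ECU interpolates at runtime:

```python
import numpy as np

# Toy 2-D map: target rail pressure (bar) over engine speed and throttle.
# Breakpoints and values are invented for illustration only.
speed_bp = np.array([1000, 2000, 3000, 4000, 5000])      # rpm
throttle_bp = np.array([10, 25, 50, 75, 100])            # % opening
rail_pressure_map = np.array([                           # bar
    [2.0, 2.5, 3.0, 3.5, 4.0],
    [2.5, 3.0, 3.5, 4.5, 5.0],
    [3.0, 3.5, 4.5, 5.5, 6.5],
    [3.5, 4.5, 5.5, 7.0, 8.0],
    [4.0, 5.0, 6.5, 8.0, 9.0],
])  # rows: speed breakpoints, columns: throttle breakpoints

def lookup_rail_pressure(speed_rpm: float, throttle_pct: float) -> float:
    """Bilinear interpolation, as a typical ECU table lookup would perform."""
    # Interpolate along throttle for each speed row, then along speed.
    per_speed = np.array([np.interp(throttle_pct, throttle_bp, row)
                          for row in rail_pressure_map])
    return float(np.interp(speed_rpm, speed_bp, per_speed))

print(lookup_rail_pressure(2600, 60))  # ~4.5 bar for this toy map
```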

Measuring, recording and inputting all this data into the relevant tables is a time consuming task, therefore development engineers often turn to modeling tools to fast-track development. These tools commonly provide an environment for simulation and model-based design for dynamic and embedded systems, thereby reducing the number of hardware versions required to design the system.

The simulation model is coded with the information gained from the real-world testing and then built into an executable using a C compiler to run on a real-time operating system.

Once the baseline data is captured, it’s possible to generate a virtually unlimited number of real-time simulations that can be applied to any facet of the design cycle – from initial concept to controller design, testing and validation using hardware-in-the-loop (HIL) testing.

A well-developed test program using laboratory-grade pressure sensors and test equipment delivers performance and drivability from CNG-fueled vehicles comparable to that of their gasoline and diesel equivalents, while offering cost and emissions benefits.

Assured leak testing by relative and absolute pressure methods

Leakages can have fatal consequences. To efficiently design production processes and to prevent costly and image-tarnishing recalls, components need to be tested early within the manufacturing process. Leak testing, for this reason, plays an important role in quality management.

The verification of seal integrity and the detection of leakages is an integral element of quality assurance across various sectors. Additionally, an early recognition of faulty parts during the manufacturing process can avoid unnecessary costs. Areas of application here include the testing of individual components, as well as complete systems either in serial production or within a laboratory environment. The sectors in question range from the auto industry (cylinder heads, transmissions, valves etc.) and medical engineering, right through to the plastics, packaging and cosmetics industries.

The German company ZELTWANGER Dichtheits- und Funktionsprüfsysteme GmbH is one of the most distinguished manufacturers of high-performance leakage testers. Depending upon the specific application, a range of leak testing procedures is available, including the relative and absolute pressure methods.

Leak testing by the relative and absolute pressure methods

The relative or absolute pressure processes deliver the following decisive advantages:

  • compact test setup of small tare volume
  • high operating safety
  • extended measurement range
  • automation optional

During these procedures, the test item is subjected to a defined pressure, and the pressure change resulting from a leak is measured and analyzed over a set time. In the relative pressure method, the difference to ambient pressure is decisive: when the test pressure is higher than ambient pressure, this is referred to as overpressure testing, while the terms negative pressure or vacuum testing apply when the test pressure is lower than ambient pressure. In the absolute pressure method, the pressure is measured relative to an absolute vacuum.
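A minimal sketch of the arithmetic behind such a pressure-decay measurement (test volume, pressure drop and timing are assumed values; commercial leak testers implement this far more completely):

```python
# Pressure-decay leak test: convert the pressure drop observed over the
# measuring time into a leak rate. All values are illustrative only.

def leak_rate_mbar_l_per_s(test_volume_l: float,
                           delta_p_mbar: float,
                           measuring_time_s: float) -> float:
    """Leak rate Q = V * dP / dt, in mbar*L/s."""
    return test_volume_l * delta_p_mbar / measuring_time_s

volume = 0.25        # L, net test volume including tubing (assumed)
pressure_drop = 2.0  # mbar observed over the measuring phase (assumed)
t_measure = 10.0     # s

q_leak = leak_rate_mbar_l_per_s(volume, pressure_drop, t_measure)
print(f"Leak rate: {q_leak:.3f} mbar*L/s")  # 0.050 mbar*L/s
```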

When leak testing by either the relative or the absolute pressure procedure, ZELTWANGER also employs pressure measurement cells made by STS. The demands placed on the technologies applied are rigorous, the essential requirements being:

  • outstanding signal processing
  • flexible pressure ranges
  • varying measurement methods (differential, relative and absolute pressures)
  • outstanding reliability

The ATM pressure sensor from STS meets these specifications with its broad pressure range of 100 mbar to 1,000 bar and an accuracy of ≤ ±0.10% FS. Beyond these figures, its fail-safe behavior and excellent signal processing are also crucial features. The modularity of STS sensors even offers manufacturers the option of straightforwardly integrating them into their own applications.

STS pressure transmitters, along with sensors developed in-house by ZELTWANGER, are already integral to devices of the ZED series, which excel in both versatility and precision. The ZEDbase+, for example, reliably measures relative and differential pressures as well as mass flow. Depending on the testing method, test pressures from vacuum up to 16 bar are recorded, and with relative pressure even the slightest pressure shifts of 0.5 Pa to 4 Pa can be detected. Besides these technical requisites, further decisive arguments in favor of STS are a reliable supply situation and flexible, uncomplicated customer support – not to mention the considerable common ground between the two companies. Our collective aim is always to provide customers with tailored solutions that exactly fulfill their exacting specifications.

Could a high-pressure direct-injection hydrogen engine replace the turbodiesel?

Having fallen from grace, the once iconic diesel power unit appears to have run its course. Even cities, such as Paris, that once incentivized the use of diesel are now calling for OEMs to stop production by 2025. Although this is highly unlikely to happen, it is an expression of the world’s concerns over global warming and air pollution in general.

To meet ever-tightening emissions regulations, OEMs are studying new and often untried forms of propulsion: everything from full electrification to hybrids and even hydrogen fuel cells is being tested as a possible solution.

Hydrogen in particular is piquing the interest of researchers around the world – it’s hailed as a clean burning fuel that could very well end up powering the transport of the future.

The difference between hydrogen and conventional hydrocarbons lies in its wide flammability range of 4 to 75 percent hydrogen by volume in air, and under ideal conditions the burning velocity of hydrogen can reach several hundred meters per second. These characteristics make it highly efficient when burning lean mixtures, with low NOx emissions.

Forty years of hydrogen injection

Hydrogen injection has been around since the 1970s and works by injecting hydrogen into a modified, internal combustion engine, which allows the engine to burn cleaner with more power and lower emissions.

Earlier low-pressure systems, which are still in use today, injected the hydrogen into the intake air before it entered the combustion chamber. But because hydrogen burns about 10 times faster than diesel and, once mixed with the diesel in the combustion chamber, increases the burn rate, several problems have been experienced, the most significant being:

  • Light-back of the gas in the manifold
  • Preignition and/or autoignition.

The best way to overcome these problems is to fit a high-pressure direct injection system that provides fuel injection late in the compression stroke.

Optimizing the combustion process through accurate pressure measurement

In order to do this, the injection needs to be accurately mapped to the engine. This can only be accomplished by gathering test data on temperature (manifold, EGT and coolant), pressure (cylinder/boost, line and injector), the turbulence in the manifold and combustion chamber, and the gas composition.

The mixture formation, the ignition and the burning processes are commonly studied through two different sets of experiments. The aim of the first experiment is to obtain information about the highly transient concentration and distribution of hydrogen during the injection process.

During this test, Laser-Induced Fluorescence (LIF) of tracer molecules is used as the primary measurement technique to study the behavior of the hydrogen under compression and ignition. Using a constant-volume combustion chamber (CVCC) with the same dimensions as the actual CI engine – meaning the CVCC volume equals the cylinder volume at top dead center – pressurized hydrogen is injected into the cold pressurized air through a hydraulically controlled needle valve.

Using high quality pressure sensors, the effect of various injection pressures on the combustion process can be studied. By observing the behavior and volume of unburned gas, the time taken to optimize the injection pressure for a specific number and position of injector nozzle holes and also the injection direction is drastically reduced.

Using special software, the ignition delay – which depends on the temperature and the concentration of hydrogen in air at a given pressure – can then be determined. Once again, it’s important that the pressure readings are accurately recorded across a range of pressures that varies between 10 and 30 MPa.
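As a generic, hedged illustration of how such a dependency is often correlated (the actual software and coefficients used in this work are not disclosed), an Arrhenius-type expression relates the ignition delay to charge temperature and pressure; every constant below is a placeholder:

```python
import math

# Generic Arrhenius-type ignition delay correlation:
#   tau = A * p^(-n) * exp(E_a / (R * T))
# The coefficients below are placeholders, not fitted values from the study.

R_UNIVERSAL = 8.314  # J/(mol*K)

def ignition_delay_ms(pressure_mpa: float, temp_k: float,
                      A: float = 0.05, n: float = 1.0,
                      E_a: float = 30e3) -> float:
    return A * pressure_mpa ** (-n) * math.exp(E_a / (R_UNIVERSAL * temp_k))

for p in (10, 20, 30):  # MPa, injection pressure range cited in the text
    print(f"{p} MPa, 900 K: {ignition_delay_ms(p, 900.0):.3f} ms")
```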

Furthermore, this method allows for the definition of areas of the injection jet where self-ignition conditions exist, which is useful for the development of an optimized injection system for engines to be converted from diesel fuel to hydrogen.

In recent tests carried out by a premium-brand OEM, the optimized high-pressure direct-injection hydrogen engine showed a promising increase in specific power while reducing fuel consumption and achieving 42% efficiency – values that match the best turbodiesel engines.

Based on the findings it would certainly appear as if work carried out on optimizing the pressure of these 30 MPa systems may in fact offer another source of clean energy for future transport.

Measuring the heartbeat of the IC engine

As a doctor measures blood pressure to determine the health of a patient, so too does the development engineer measure crankcase pressure to gain an insight into the condition of an engine on the testbed. Not only does an increase in pressure provide an early indication of wear, but pressure measurement is also crucial in the development of modern positive crankcase ventilation systems that need to comply with emissions regulations.

It’s important to note that the measurement of crankcase pressure is not a direct measurement of ‘blow-by’, which is measured as a flow rate in standard cubic meters per second.

Measuring crankcase pressure to monitor cylinder liner, piston and ring wear.

Development engines are not cheap, given that there’s usually an intensive engineering design program behind them, so the last thing any engineer wants to see is a test literally go up in smoke. To minimize the risk, testbeds nowadays are instrumented with a myriad of sensors to monitor everything from oil pressure and ambient temperature to EGTs and, of particular interest here, crankcase pressure.

Crankcase pressure sensors used on testbeds are particularly interesting: not only are they capable of measuring relatively small variations in pressure, but they are also stable across a wide temperature range while withstanding submersion in hot oil. This is particularly important as the sensor is often fitted to the sump or oil filler tube, where it comes into direct contact with hot engine oil.

The piston-rings-cylinder (PRC) system is subjected to extreme stresses such as high frictional and accelerative forces, as well as extreme temperatures and pressures resulting from the combustion process.

Under these conditions some combustion gas will always leak past the rings into the crankcase, but as component wear increases, so will the pressure inside the engine. This is the basic principle behind measuring crankcase pressure as an early indication of wear on engines running on dynamometers or testbeds.
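A minimal sketch of how a testbed might flag that trend (thresholds and data are invented for illustration): a sustained rise of the mean crankcase pressure above its baseline is treated as an early wear warning.

```python
# Flag a sustained rise of crankcase pressure above its baseline as an
# early indicator of ring/liner wear. Thresholds and data are invented.

def wear_warning(pressure_samples_mbar: list[float],
                 baseline_mbar: float,
                 threshold_mbar: float = 5.0,
                 window: int = 5) -> bool:
    """True if the mean of the last `window` samples exceeds the
    baseline by more than `threshold_mbar`."""
    if len(pressure_samples_mbar) < window:
        return False
    recent = pressure_samples_mbar[-window:]
    return (sum(recent) / window) - baseline_mbar > threshold_mbar

# Hypothetical log: crankcase pressure creeping up during a durability run
log = [2.0, 2.1, 2.3, 2.4, 3.0, 4.1, 5.5, 6.8, 7.9, 8.5, 9.2, 9.8]
print(wear_warning(log, baseline_mbar=2.0))  # True once the drift is sustained
```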

In forced-induction CI engines this increase in crankcase pressure can be catastrophic, as the return of oil from the compressor will often be restricted, resulting in the labyrinth seal failing and causing a total loss of lubrication to the bearings.

Beyond the importance of monitoring the PRC system’s condition, optimizing positive crankcase ventilation (PCV) through accurate measurement of internal pressure is vital in meeting emissions legislation.

Designing the PCV for a cleaner environment.

In the early 1960s, General Motors identified crankcase gases as a source of hydrocarbon emissions and developed the PCV valve in an effort to help curb them. This was the first real emissions control device fitted to a vehicle.

Ideally, the crankcase pressure should be controlled to just above atmospheric so that there’s enough pressure to exclude dust and moisture, but not enough to force oil past seals and gaskets; or on a forced induction engine, restrict the return of oil to the sump.

The first step in the design of an effective PCV valve is to determine the actual pressure in the crankcase using a high-quality pressure sensor specifically designed to measure small differentials accurately, while providing repeatable readings across a wide temperature range.

Armed with the data accumulated during performance and durability runs, engineers are able to determine the appropriate parameters for the PCV valve:

  • Suitable cross-sectional area to facilitate sufficient vapour flow from the crankcase (see the sizing sketch after this list)
  • Correct operating pressure parameters to ensure unrestricted oil return on turbocharged engines, whilst retaining positive internal pressure.
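As a simplified illustration of the first of these parameters (an incompressible orifice approximation with assumed figures, not an actual PCV design calculation), the required cross-sectional area can be estimated from the vapour flow and the allowable pressure differential:

```python
import math

# Simplified PCV orifice sizing from a required crankcase vapour flow and
# the allowable crankcase-to-manifold pressure differential.
# Incompressible orifice equation; all figures are assumptions.

def orifice_area_mm2(flow_l_min: float,
                     delta_p_mbar: float,
                     gas_density_kg_m3: float = 1.2,
                     discharge_coeff: float = 0.7) -> float:
    q_m3_s = flow_l_min / 1000.0 / 60.0
    delta_p_pa = delta_p_mbar * 100.0
    velocity = math.sqrt(2.0 * delta_p_pa / gas_density_kg_m3)
    area_m2 = q_m3_s / (discharge_coeff * velocity)
    return area_m2 * 1e6

# Hypothetical: 30 L/min of crankcase vapour at a 20 mbar differential
print(f"{orifice_area_mm2(30.0, 20.0):.1f} mm^2")  # ~12 mm^2
```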

Finally the prototype valve is evaluated on a testbed, again with crankcase pressure sensors fitted, to confirm performance and durability, as well as emissions compliance.

This development can span weeks and account for a sizeable chunk of the development bill, so the last thing a manufacturer would want is the failure of a vital sensor, which would require a partial or even complete retest. That’s why OEMs only use high-quality pressure transmitters, such as those produced by the pressure transmitter and transducer manufacturer STS.
