Issue 45 - Summer 2012

Powering Ahead

Adaptive energy management on the Evora 414E Hybrid.

An important factor in hybrid vehicle ownership is the running cost. Lotus Engineering has been working on a cost-based adaptive energy-management control strategy for the 414E.

Hybrid electric powertrain of the 414E

Introduction

The series-hybrid powertrain architecture of the Evora 414E Hybrid allows some interesting optimisation work to be performed, giving the vehicle the ability to minimise its own fuel consumption in response to the vehicle's electrical power demands. To manage energy flow between the battery, range-extender and vehicle loads, an adaptive energy management technique has been developed in which the arbitration of power flow is derived by evaluating an instantaneous "fuel-equivalent cost" of the range-extender and battery. The energy manager calculates the average vehicle power demand over a series of trailing time windows and evaluates instantaneous cost functions before determining the feed-forward range-extender operating point. The aim of the energy management system (EMS) is to determine and apply the best ratio of engine-to-battery power in order to minimise fuel consumption. The EMS must consider a number of different engine powers in order to form a comparison and decide on the optimal powertrain state. The candidate engine powers are formalised as average vehicle power demands, calculated across trailing time windows of 1-20 seconds, as well as the instantaneous power demand.

Vehicle power demand is measured directly from the DC bus that delivers energy to every part of the vehicle system. Measuring total power demand in this way ensures that the EMS optimises not only for road loads but also for the auxiliary loads demanded by the complete vehicle system.
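
To make the windowing concrete, the sketch below maintains running averages of DC-bus power over a set of trailing windows. This is illustrative only, not the production controller code: the 0.1 s sample period, the particular window lengths and all names are assumptions for illustration.

```python
# Illustrative sketch (not the production controller): maintaining trailing-average
# DC-bus power demands over several window lengths.
from collections import deque

SAMPLE_PERIOD_S = 0.1                      # assumed controller sample period
WINDOW_LENGTHS_S = (1, 5, 10, 20)          # trailing windows spanning 1-20 seconds

class TrailingAverages:
    """Running average of DC-bus power over each trailing window."""

    def __init__(self):
        self.buffers = {
            w: deque(maxlen=round(w / SAMPLE_PERIOD_S)) for w in WINDOW_LENGTHS_S
        }

    def update(self, bus_power_w: float) -> dict:
        """Push the latest DC-bus power sample and return the candidate averages."""
        candidates = {"instantaneous": bus_power_w}
        for window_s, buf in self.buffers.items():
            buf.append(bus_power_w)
            candidates[window_s] = sum(buf) / len(buf)
        return candidates

# Example: feed in a constant 15 kW DC-bus demand
averager = TrailingAverages()
for _ in range(50):
    candidates = averager.update(15_000.0)
```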

Fig 1. Trailing Window Average Power Demands

Figure 1 shows the trailing-average power demands with time over a section of the NEDC drivecycle. At each time step, the EMS takes a vertical slice through the trailing-average ribbons and evaluates the cost of running the engine at the power level stored in each average buffer. If the average power stored in a buffer is not equal to the instantaneous vehicle demand, the voltage difference in the circuit causes the battery to passively absorb or release the difference, so the DC bus always delivers exactly the power that the vehicle requires. The combination of engine and battery power corresponding to minimal fuel consumption is then applied in a feed-forward control approach.
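
As a rough illustration of this selection step, the minimal sketch below costs each candidate engine power against the instantaneous demand and applies the cheapest. The two cost functions are invented placeholders; the article's actual engine and battery costs are described in the sections that follow.

```python
# Minimal sketch of the feed-forward selection step with placeholder cost curves.

def engine_fuel_cost(p_w: float) -> float:
    """Placeholder APU fuel rate (g/s) as a convex function of electrical power."""
    return 0.0 if p_w <= 0 else 0.05 * (p_w / 1000) + 0.002 * (p_w / 1000) ** 2

def battery_fuel_cost(p_w: float) -> float:
    """Placeholder fuel-equivalent cost of battery power (+ve = discharge)."""
    return 0.06 * (p_w / 1000) if p_w > 0 else 0.02 * (-p_w / 1000)

def total_cost(engine_power_w: float, demand_w: float) -> float:
    battery_power_w = demand_w - engine_power_w    # the battery makes up the difference
    return engine_fuel_cost(engine_power_w) + battery_fuel_cost(battery_power_w)

def select_engine_power(candidate_powers_w, demand_w: float) -> float:
    """Return the candidate engine power with the lowest fuel-equivalent cost."""
    return min(candidate_powers_w, key=lambda p: total_cost(p, demand_w))

# Example: candidates are the trailing-window averages plus the instantaneous demand
best_power = select_engine_power([6_000, 9_500, 12_000, 15_000], demand_w=15_000)
```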

Engine Cost

Because the range-extender engine is not mechanically coupled to the driven wheels in the series-hybrid architecture, its speed and torque do not have to be correlated with the speed and torque required at the driven wheels. Instead, the range-extender can be operated over the speed-torque locus that results in the minimum fuel burn per unit of energy delivered. This locus is found by combining the efficiency maps of the engine and generator and tracing along each power contour until the maximum-efficiency point is found for that power output. The fuel-equivalent cost of auxiliary power unit (APU) power is calculated from the efficiency map in Figure 2 and the calorific value of the fuel. A transient correction factor is included to account for the extra fuel consumed during transient power demands.

Fig 2. Optimal APU Operating Locus (Best Fuel for Power)
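
A compact sketch of these two calculations is given below, using an invented combined efficiency map and an assumed calorific value rather than the measured 414E data, and omitting the transient correction factor.

```python
import math

LHV_J_PER_G = 44_000.0        # assumed lower heating value of petrol, J per gram

def combined_efficiency(speed_rpm: float, torque_nm: float) -> float:
    """Invented placeholder for the combined engine x generator efficiency map."""
    return 0.30 * math.exp(-((speed_rpm - 3500) / 2500) ** 2
                           - ((torque_nm - 90) / 80) ** 2)

def best_point_for_power(power_w: float):
    """Trace the power contour and keep the most efficient speed/torque point."""
    best_point, best_eta = None, 0.0
    for speed_rpm in range(1500, 6001, 100):
        torque_nm = power_w / (speed_rpm * 2 * math.pi / 60)
        eta = combined_efficiency(speed_rpm, torque_nm)
        if eta > best_eta:
            best_point, best_eta = (speed_rpm, torque_nm), eta
    return best_point, best_eta

def apu_fuel_rate_g_per_s(power_w: float) -> float:
    """Fuel-equivalent cost of APU power: electrical power / (efficiency x LHV)."""
    _, eta = best_point_for_power(power_w)
    return power_w / (eta * LHV_J_PER_G)

print(apu_fuel_rate_g_per_s(20_000.0))     # grams of fuel per second at 20 kW
```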

Battery Cost

The cost of applying battery power is evaluated in terms of the fuel used to charge the battery, as well as the efficiency losses from the battery's internal (ohmic) resistance. When the battery is charged, completely or partially, from excess range-extender power, the portion of fuel being used to charge the battery is amplified by the efficiency loss incurred while the battery is charging, and again by a predicted loss for when that energy is drawn out of the battery later on. Ohmic-resistance losses depend on the rate of power draw from the battery. Because we do not know the rate at which the battery will be depleted in the future, we project an average battery power draw, accumulated from the beginning of every trip. Because the average battery draw is reset at the beginning of every trip, the system reliably optimises over varying driver behaviour: if you are driving particularly aggressively one day, the system responds to power demand over a short time horizon rather than being tuned with a single set of fixed parameters that capture "average" driver behaviour. This gives a more adaptive, locally relevant optimisation than conventional approaches.
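
A minimal sketch of this battery cost bookkeeping is shown below. The efficiencies, pack resistance and voltage are assumed figures for illustration, not 414E pack data.

```python
CHARGE_EFFICIENCY = 0.95        # assumed one-way battery charge efficiency
DISCHARGE_EFFICIENCY = 0.95     # assumed (predicted) discharge efficiency
PACK_RESISTANCE_OHM = 0.10      # assumed pack internal resistance
NOMINAL_VOLTAGE_V = 360.0       # assumed nominal pack voltage

class TripAverageDraw:
    """Average battery power draw accumulated from the start of the trip."""

    def __init__(self):
        self.total_energy_j = 0.0
        self.total_time_s = 0.0

    def update(self, battery_power_w: float, dt_s: float) -> float:
        self.total_energy_j += max(battery_power_w, 0.0) * dt_s
        self.total_time_s += dt_s
        return self.total_energy_j / self.total_time_s if self.total_time_s else 0.0

def fuel_price_of_stored_energy(fuel_g_used_to_charge: float) -> float:
    """Fuel used to charge, amplified by charge and predicted discharge losses."""
    return fuel_g_used_to_charge / (CHARGE_EFFICIENCY * DISCHARGE_EFFICIENCY)

def ohmic_loss_w(average_draw_w: float) -> float:
    """Resistive loss estimated from the projected average power draw."""
    current_a = average_draw_w / NOMINAL_VOLTAGE_V
    return current_a ** 2 * PACK_RESISTANCE_OHM

# Example: project the trip-average draw and its associated ohmic loss
avg = TripAverageDraw()
projected_draw_w = avg.update(battery_power_w=4_000.0, dt_s=1.0)
loss_w = ohmic_loss_w(projected_draw_w)
```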

Fig 3. SoC Trajectory Over Repeated NEDC Cycles

The fact that the overwhelming majority of trips are initiated when the state of charge (SoC) is high enough to allow electric-only operation also means that driver behaviour has usually been captured adequately by the system before the APU is engaged. This avoids a lengthy "learning" period beginning when the range-extender turns on for the first time in a trip. When the battery is discharged, the amount of fuel that was used to charge it up to that point is known, so the round-trip efficiency of fuel-derived energy over the battery cycle is based on complete knowledge; prediction of future parameters is not required, as it is in the battery-charging case.

The battery discharge cost is scaled by a term, the fuel portion, which is the fraction of all energy stored in the battery at any time that has been provided by the range-extender engine, i.e. from the combustion of fuel. If the battery contained only energy provided by grid electricity during a plug-in charge, the fuel portion would equal zero. Throughout a long drive, the engine will top up the battery and the fuel portion will increase. As this happens, the perceived fuel-equivalent cost of discharging the battery will also increase. As a result, the rate of battery discharge that the energy management system deems to be optimal will decrease, and the battery will eventually reach a stable state of charge, where the perceived cost of charging is roughly equivalent to the cost of discharging. Many other systems scale the battery cost based on how far the current battery state of charge is from a predetermined target, with battery cost perceived as a negative quantity when SoC is above this target. We choose to use the fuel portion instead, as it consistently provides a cost which is relevant to the variable we are aiming to minimise, i.e. fuel consumption.

We aim for a target SoC to be maintained in charge-sustaining operation, although the system will allow the battery to deplete further if the global cost function shows it is optimal to do so. The battery is also allowed to discharge in accordance with the instantaneous kinetic energy of the vehicle, because we know that when the vehicle decelerates, a broadly consistent amount of energy will be recuperated through regenerative ("regen") braking. We observe a fluctuation of about 2-3% in battery state of charge over a single charge-sustaining cycle. Physical hard limits are also enforced in the controller, based on empirical data from either the cell manufacturer or independent test houses. Each cell has to be operated within certain parameters, for example voltage: if the voltage of a cell is allowed to drop too far, an irreversible chemical reaction takes place and it may not be possible to charge the cell again. Depending on how the battery pack is designed and constructed, this renders the pack at the very least ineffective and possibly unusable without cell replacement.
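
One way the fuel portion might be tracked and used to scale the discharge cost is sketched below; the bookkeeping is an assumption for illustration and is not taken from the production controller.

```python
class FuelFractionTracker:
    """Tracks the share of stored battery energy that originated from fuel."""

    def __init__(self, initial_energy_j: float):
        self.total_j = initial_energy_j      # starts as grid energy after a plug-in charge
        self.fuel_j = 0.0                    # so the fuel portion starts at zero

    @property
    def fuel_portion(self) -> float:
        return self.fuel_j / self.total_j if self.total_j > 0 else 0.0

    def charge(self, energy_j: float, from_engine: bool):
        self.total_j += energy_j
        if from_engine:
            self.fuel_j += energy_j          # engine top-ups raise the fuel portion

    def discharge(self, energy_j: float):
        # Discharge removes fuel-derived and grid-derived energy in proportion.
        self.fuel_j -= energy_j * self.fuel_portion
        self.total_j -= energy_j

def discharge_cost(base_cost: float, fuel_portion: float) -> float:
    """Perceived fuel cost of discharge scales with the fuel portion."""
    return base_cost * fuel_portion

# Example: an engine top-up raises the fuel portion and hence the discharge cost
pack = FuelFractionTracker(initial_energy_j=50e6)   # ~14 kWh of grid energy
pack.charge(5e6, from_engine=True)
cost = discharge_cost(base_cost=1.0, fuel_portion=pack.fuel_portion)
```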

Comparison with other systems

To highlight the benefits of the new adaptive energy management system, we can compare its performance with that provided by a simple stop-start strategy and a load-following strategy. In the stop-start strategy, the APU runs at a fixed point, its peak-efficiency load point (which usually does not coincide with peak battery-charging efficiency), charging the battery until an upper threshold SoC is reached.

Fig 4. SoC Trajectory for Three Charge-sustaining Strategies

Once the battery is sufficiently charged, the APU powers off and the vehicle runs on the battery alone until a lower threshold SoC is reached, and the process repeats. This results in the battery SoC following an aggressive, fast-acting, sawtooth-shaped cycle with time. The load-following strategy runs the APU at a power level equal to the instantaneous powertrain demand. The battery charges up gradually as regenerative braking energy is accumulated until an upper threshold SoC is reached, at which point the APU powers off and the battery is used down to a lower threshold. The cycle repeats less frequently than with the stop-start strategy, but the large swings in battery state of charge are still prominent in the graph; see Figure 4. The SoC trace produced by the Lotus adaptive EMS has a gentler, more flowing nature and shows smaller swings in SoC with time. The EMS carves out its own SoC trajectory based on the most efficient of the options posed to it, and it is a coincidence that this SoC profile causes less degradation to the battery.

This is not to be overlooked, as one of the largest perceived issues for the consumer is battery lifetime. Battery degradation operates on a similar principle to mechanical fatigue, where a small number of low-magnitude cycles causes less damage than a large number of high-magnitude swings. The degradation manifests itself as a reduction in capacity, and the automotive industry defines a battery as no longer fit for purpose when its capacity has degraded to 80% of its value when new. At this point the battery is still usable, and there will come a time when there is an industry of "second-use" batteries, removed from vehicles and used, for example, as stationary energy storage.
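
For reference, the two baseline strategies can be sketched as simple threshold controllers. The SoC thresholds and the peak-efficiency load point below are assumed figures, not the values used in the study.

```python
# Toy sketches of the two baseline strategies compared against the adaptive EMS.
SOC_UPPER, SOC_LOWER = 0.60, 0.30           # assumed hysteresis thresholds
APU_PEAK_EFF_POWER_W = 25_000.0             # assumed peak-efficiency load point

def stop_start_apu_power(soc: float, apu_on: bool) -> tuple:
    """Bang-bang control between SoC thresholds; APU fixed at its peak-efficiency point."""
    if apu_on and soc >= SOC_UPPER:
        apu_on = False
    elif not apu_on and soc <= SOC_LOWER:
        apu_on = True
    return (APU_PEAK_EFF_POWER_W if apu_on else 0.0), apu_on

def load_following_apu_power(demand_w: float, soc: float, apu_on: bool) -> tuple:
    """APU follows the instantaneous demand until the upper SoC threshold is reached."""
    if apu_on and soc >= SOC_UPPER:
        apu_on = False
    elif not apu_on and soc <= SOC_LOWER:
        apu_on = True
    return (max(demand_w, 0.0) if apu_on else 0.0), apu_on

# Example: below the lower threshold the stop-start APU switches on at 25 kW
power_w, apu_on = stop_start_apu_power(soc=0.28, apu_on=False)
```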

Table 1. Weighted CO2 Emissions for Different Strategies

The performance of the adaptive energy management system, in terms of fuel consumption and the linearly related CO2 emissions, is impressive when compared with the performance of the stop-start and load-following methods. The adaptive system used 0.26% more fuel than the stop-start system on a simulated NEDC drivecycle, as shown in Table 1.

Fig 5. Comparison of Stop-start and

On inspection of Figure 5, we can see that CO2 accumulates rapidly with a stop-start strategy during battery charging, and then falls during battery discharge. As a consequence, it is possible for total CO2 to be low at the end of a trip with a stop-start controller, although there is an element of luck involved: the stop-start parameters are non-adaptive and hence the end state is not controllable. It is, in fact, more likely that a trip using the stop-start controller will end with a very high CO2 output and a battery containing more energy than it had at the beginning of the trip. The adaptive method, by contrast, provides consistently optimal fuel consumption and emissions.

When weighted with statistical distributions of trip distances, the probability of the adaptive system burning less fuel than the stop-start system is greater than 98%. This is because the adaptive system is favourable over shorter trip durations, and short trips are far more frequent than very long trips.
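
The weighting step amounts to summing the probability mass of the trip distances on which the adaptive system wins. The toy illustration below uses invented numbers purely to show the calculation; it is not the distribution or the results behind Table 1.

```python
# Hypothetical trip-distance distribution (km: probability) and per-distance outcomes.
trip_share = {5: 0.45, 10: 0.25, 20: 0.15, 50: 0.10, 100: 0.05}
adaptive_beats_stop_start = {5: True, 10: True, 20: True, 50: True, 100: False}

p_adaptive_better = sum(
    share for km, share in trip_share.items() if adaptive_beats_stop_start[km]
)
print(f"Probability the adaptive system burns less fuel: {p_adaptive_better:.0%}")
```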

 Authors: Adam Chapman and Phil Barker

About Lotus proActive

Lotus proActive is an e-magazine published quarterly by Lotus Engineering, covering engineering articles, industry news and articles from within Group Lotus (Cars, Engineering, Originals and Racing).
