Risk Flags for the Human-AI Scuderia
Various international regulations underline that humans must remain in the decision-making process, as algorithms cannot be held responsible. Nevertheless, even if the final decision-maker is human, the AI may influence that decision in subtle and inappropriate ways.
Relevant biases include over-trust in the machine, where humans perceive it as more knowledgeable than themselves and as acting flawlessly. As a consequence, results coming from an algorithm are not adequately checked and challenged.
Almost machine-like, the human decision-maker simply confirms the AI result. To comply with regulations, algorithms therefore do not make decisions; they filter information, rank suggestions, and/or show various risk flags.
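As a minimal, hypothetical sketch of this division of labour, the following Python snippet shows an advisory component that only ranks candidate options and attaches yellow-flag-style warnings; the names (Option, advise, risk_threshold, the supplier examples) are illustrative assumptions, not part of any specific tool.

```python
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    score: float   # model-estimated suitability (higher is better)
    risk: float    # model-estimated risk, 0.0 (low) to 1.0 (high)


def advise(options, risk_threshold=0.6):
    """Rank options and attach warning flags; never pick one automatically.

    Returns (option, flag) pairs sorted by score. The flag is a yellow
    warning when the estimated risk exceeds the threshold; the decision
    itself is deliberately left to the human reviewer.
    """
    ranked = sorted(options, key=lambda o: o.score, reverse=True)
    return [(o, "YELLOW FLAG" if o.risk >= risk_threshold else "clear")
            for o in ranked]


if __name__ == "__main__":
    candidates = [
        Option("Supplier A", score=0.91, risk=0.72),
        Option("Supplier B", score=0.84, risk=0.25),
        Option("Supplier C", score=0.63, risk=0.10),
    ]
    for option, flag in advise(candidates):
        print(f"{option.name:12s} score={option.score:.2f} "
              f"risk={option.risk:.2f} -> {flag}")
```

The key design choice in such a sketch is that the function returns the full ranked list with its flags rather than a single "winner", so the human keeps visibility of all options and cannot delegate the choice to the machine.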
Such signs can be understood as yellow flags as known from sports. Not like the yellow card in football (or soccer), which is less a risk advisory than a mild punishment, but like the yellow flag in Formula 1, which signals caution on the track.
When a yellow flag is displayed, drivers must slow down, be prepared to stop if necessary, and not overtake other cars until they have passed the hazard zone.
Yellow flags are typically shown when there is a hazard or obstruction on or near the track, such as a crash, debris, or a car in a dangerous position. This helps ensure the safety of drivers, marshals, and other personnel on the circuit.
If a driver fails to adhere to the yellow flag rules, they may face penalties from race stewards.
The yellow flag can include red stripes, which warns of a slippery track surface. It is usually waved by marshals to alert drivers to hazardous conditions, such as oil spills, water on the track, or debris, which could make the racing surface treacherous and reduce grip.
When drivers see this flag, they are expected to exercise caution and adjust their driving accordingly to avoid accidents. Even the way the flag is waved has meaning: a double-waved yellow flag is a more serious warning.
Seeing these different types of yellow flags requires the driver to slow down, understand what caused the warning, and then act appropriately. Strategies can include staying out on the track or coming into the pits to change tires or make other small adjustments.
In contrast, the red flag means that the practice session, qualifying, or race has been stopped. This is not a punishment, but a measure to protect the drivers or other stakeholders, such as track marshals.
As in Formula 1, employees using AI tools must understand risk scores or other symbols as warning flags. These should not automatically lead to the disqualification of options or to the confirmation of the option with the lowest indicated risk; rather, humans have the responsibility to consider them and then carry out an adequately thorough decision-making process, which may confirm the AI's assessment but may also reach a different conclusion.
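One way to support such a process, sketched below under assumed names (Decision, record_decision, the rationale rule), is to log the human's final choice next to the AI suggestion and its flag, and to require a written rationale whenever a flagged option is chosen or the suggestion is overridden. This is an illustrative assumption about how accountability could be recorded, not a prescribed implementation.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    chosen_option: str
    ai_suggestion: str
    risk_flag: str
    human_rationale: str


def record_decision(chosen_option: str, ai_suggestion: str,
                    risk_flag: str, human_rationale: str) -> Decision:
    """Log the final human decision alongside the AI suggestion and its flag.

    A rationale is mandatory whenever a flagged option is chosen or the AI
    suggestion is overridden, so responsibility stays with the human.
    """
    needs_rationale = risk_flag != "clear" or chosen_option != ai_suggestion
    if needs_rationale and not human_rationale.strip():
        raise ValueError("A written rationale is required for flagged "
                         "or overriding decisions.")
    return Decision(chosen_option, ai_suggestion, risk_flag, human_rationale)


# Example: the human confirms a flagged option and documents why.
decision = record_decision(
    chosen_option="Supplier A",
    ai_suggestion="Supplier B",
    risk_flag="YELLOW FLAG",
    human_rationale="Supplier A's risk stems from a one-off audit finding "
                    "that has since been remediated.",
)
print(decision)
```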
Humans remain ethically and legally responsible for their decisions; if they do not live up to this responsibility, the result could be the black flag: disqualification.