Is the Boeing 737 Max safe enough to fly?
The Boeing 737 Max began flying commercially in May 2017 but has been grounded for over a year and a half following two crashes within five months. On October 29, 2018, Lion Air Flight 610 took off from Jakarta. It quickly experienced problems maintaining altitude, entered an uncontrollable dive and crashed into the Java Sea about 13 minutes after takeoff. Then on March 10, 2019, Ethiopian Airlines Flight 302, bound for Nairobi, suffered similar problems after taking off from Addis Ababa, crashing around six minutes after leaving the runway.
In total, 346 people lost their lives. After the second crash, the US regulator, the Federal Aviation Administration (FAA), decided to ground all 737 Max planes – around 350 of which had been delivered at the time – while it investigated the causes of the accidents.
Now, 20 months later, the FAA has announced that it is rescinding this order and has set out steps for the return of the aircraft to commercial service. Brazil has responded quickly, also approving the 737 Max. So, what went wrong – and can we be confident that it has been fixed?
The causes of the two accidents were complex, but link mainly to the 737’s manoeuvring characteristics augmentation system (MCAS), which was introduced to the 737 Max to manage changes in behaviour created by the plane having much larger engines than its predecessors.
There are some important points about the MCAS which we must consider when reviewing the “fixes”. The MCAS prevented stall (a sudden loss of lift that occurs when the angle between the wing and the oncoming air becomes too steep) by “pushing” the nose down. A potential stall is indicated by an angle of attack (AoA) sensor – the 737 Max is fitted with two, but MCAS used only one. If that AoA sensor failed, the MCAS could activate when it shouldn’t, unnecessarily pushing the nose down. The design included no automatic switch to the other AoA sensor, so MCAS kept working with the erroneous sensor values. This is what happened in both crashes.
The design of the MCAS meant that it was repeatedly activated if it determined that there was a risk of a stall. The nose was therefore continually pushed down, making it hard for pilots to maintain altitude or climb. The system was also hard to override. In both cases, the flight crews were unable to override the MCAS – although other crews had managed to do so in similar situations – and this contributed to the two accidents.
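To make this failure mode concrete, here is a minimal sketch of single-sensor, repeatedly activating control logic of the kind described above. It is purely illustrative: the function name, the 15-degree threshold and the trim increment are invented for this example and are not taken from Boeing’s actual flight software.

```python
# Purely illustrative sketch of the ORIGINAL, flawed MCAS-style logic.
# All names, thresholds and units are invented; this is not flight software.

STALL_AOA_THRESHOLD_DEG = 15.0  # hypothetical angle-of-attack limit


def mcas_step_original(aoa_left_deg: float, aoa_right_deg: float) -> float:
    """Return a nose-down trim command (degrees) for one control cycle.

    Flaws mirrored from the accident findings:
    - only ONE of the two AoA sensors is read, so a single faulty
      sensor can trigger nose-down trim;
    - the function is called every cycle, so the command repeats
      for as long as the faulty reading persists.
    """
    aoa = aoa_left_deg  # single-sensor design: the right sensor is ignored
    if aoa > STALL_AOA_THRESHOLD_DEG:
        return -2.5  # hypothetical nose-down trim increment, every cycle
    return 0.0


# A stuck sensor reading 25 degrees drives the nose down cycle after cycle,
# even though the other sensor reports a normal angle of attack.
for cycle in range(3):
    print(cycle, mcas_step_original(aoa_left_deg=25.0, aoa_right_deg=5.0))
```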
The fixes
Have these things been fixed? The FAA has published an extensive summary explaining its decision. The MCAS software has been modified and now uses both AoA sensors, not just one. The MCAS also now activates only once, rather than repeatedly, when a potential stall is signalled by both AoA sensors. Pilots are provided with an “AoA disagree” warning, which indicates that there might be an erroneous activation of MCAS. This warning was not standard equipment at the time of the two accidents – airlines had to purchase it as an option.
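As a similarly hedged illustration (the real, certified logic is far more involved), the sketch below revises the toy example along the lines the FAA describes: both sensors are compared, a disagreement raises a warning instead of commanding trim, and the system activates at most once per high-AoA event. The disagreement limit and the re-arming behaviour are assumptions made for clarity.

```python
# Illustrative sketch of the REVISED behaviour described by the FAA.
# Names, thresholds and the one-shot mechanism are assumptions for clarity.

STALL_AOA_THRESHOLD_DEG = 15.0  # hypothetical stall threshold
AOA_DISAGREE_LIMIT_DEG = 5.5    # hypothetical sensor-disagreement limit


class McasRevised:
    def __init__(self) -> None:
        self.already_activated = False  # at most one activation per event

    def step(self, aoa_left_deg: float, aoa_right_deg: float) -> tuple[float, bool]:
        """Return (nose-down trim command, AoA-disagree warning flag)."""
        disagree = abs(aoa_left_deg - aoa_right_deg) > AOA_DISAGREE_LIMIT_DEG
        if disagree:
            return 0.0, True  # warn the crew; never trim on suspect data
        # Both sensors must indicate a stall risk before acting.
        stall_risk = min(aoa_left_deg, aoa_right_deg) > STALL_AOA_THRESHOLD_DEG
        if stall_risk and not self.already_activated:
            self.already_activated = True  # activate once, not repeatedly
            return -2.5, False
        if not stall_risk:
            self.already_activated = False  # re-arm once AoA is normal again
        return 0.0, False


mcas = McasRevised()
print(mcas.step(25.0, 5.0))   # sensors disagree  -> (0.0, True): warning only
print(mcas.step(20.0, 19.0))  # genuine high AoA  -> (-2.5, False): one command
print(mcas.step(20.0, 19.0))  # same event        -> (0.0, False): no repeat
```

The point of the toy example is the structure rather than the numbers: no single sensor failure can now command nose-down trim, and a runaway sequence of repeated activations is prevented by design.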
Importantly, pilots will now be trained on the operation of the MCAS and management of its problems. Pilots claimed that initially they were not even told that MCAS existed. This training will have to be approved by the FAA.
So, is all well? Probably. Because the 737 Max accidents put Boeing and the FAA under such intense scrutiny, it is likely that the design and safety activities have been carried out and checked to the maximum extent possible. There is no such thing as perfection in such complex engineering processes, but it is clear that this has been an extremely intensive effort, and that Boeing found and corrected a few other potential safety problems unrelated to the accidents.
Of course, we are not there yet. The more than 300 aircraft already delivered have to be modified, and the 450-or-so built but not delivered also need to be updated and checked by the FAA. Then the pilots need to be trained. And the airlines need passengers – but will they get them? That is an issue of trust.
Safety culture and trust
The US congressional inquiry was scathing about the culture at both Boeing and the FAA, and about the FAA’s difficulty in overseeing Boeing’s work. Some commentators have also referred to an absence of psychological safety: “The assurance that one can speak up, offer ideas, point out problems, or deliver bad news without fear of retribution.” We have evidence that the engineering problems have been fixed, but safety culture is more nebulous and slower to change.
How would we know if trust has been restored? There are several possible indicators.
Due to the effects of COVID-19, airlines are running a reduced flight schedule, so they may not need to use the 737 Max. If they choose not to do so, despite its reduced operating costs compared to earlier 737 models, that will be telling. Certainly, all eyes will be on the first airline to return the aircraft to the skies.
Some US airlines have said they will advise people which model of aircraft they will be flying. If passengers opt to avoid the 737 Max, that will speak volumes about public trust and confidence.
The FAA press release also says there has been an “unprecedented level of collaborative and independent reviews by aviation authorities around the world”. But if the international authorities ask for further checks or delay the reintroduction of the aircraft in their jurisdictions, that will be particularly significant as it reflects the view of the FAA’s professional peers. Brazil’s rapid response is a positive sign for this international engagement.
Hopefully, the first few years will prove uneventful and trust can be rebuilt. But only time will tell.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Live Science.
John McDermid became professor of Software Engineering at the University of York in the U.K. in 1987. His research covers a broad range of issues in systems, software and safety engineering. He became director of the Lloyd’s Register Foundation funded Assuring Autonomy International Programme in 2018, focusing on safety of robotics and autonomous systems. He is an advisor to government and industry, including FiveAI, the U.K. Ministry of Defence and aerospace companies including Rolls-Royce. He is author or editor of six books and has published about 400 papers. He is a visiting professor at Beijing Jiaotong University.