
Flight JT610 Crashed Into The Sea

Tags: Lion Air, Sumatra


#321 Stuart Galbraith

    Just Another Salisbury Tourist

  • Members
  • 58,876 posts

Posted 21 June 2020 - 0158 AM

Yes, but look at it from the ATC point of view. The pilot has avoided a potentially dangerous landing, but for ATC the danger is integrating the aircraft back into the stack, which in busy periods I would guess is not altogether straightforward.

 

Bad practice of course, but it's no worse than pilot get-there-itis, which was an industry-wide problem for many years. Still is, occasionally.



#322 RETAC21

    A la lealtad y al valor

  • Members
  • 14,309 posts

Posted 21 June 2020 - 0458 AM

I am hopeful the pressures affecting pilot decision-making in the St. Maarten ditching are less relevant today. If they are not, 50 years is a long time to wait for them to be remedied.

 

Still surprised that there is actually a system in place that enables ATC stigmatization of pilots trying to land. As if the pressure of landing a planeload of souls was not enough. 

 

Much depends on the culture in which the pilots are brought up. Sure, you pick up a Dutch crew today and it's completely different from the 70s. With Pakistan, not so sure, as seniority carries a gravitas that junior officers would be loath to challenge, and that then carries over as the FO makes Captain - and that's hard to break:

 

"In his 24-year carrier in airline industry, Captain Sajjad Gul had an experience of flying planes for more than 17,000 hours. He flew A320 airbus for 4,700 hours. "

 

For the FO:

 

"- He came from a very humble background. Both his parents are ill. He was quite religious and would often go for preaching too. Would never gain an undue advantage due to his job, etc.
- He was 33, unmarried, and joined PIA around 2011-12.
- First was qualification, completing hours, etc. The total time flying has been around 7 years. For the first 3-4 years, he flew only domestic. Later 3-4 years, he started international flights."
 
So it could very easily go like this: the FO makes a dumb mistake and delays the point at which they start their descent, the Captain takes over and tries to save the day, and the FO sees they are on an unstabilised approach but hopes the Captain knows what he's doing, until it's too late...


#323 Nobu

    Crew

  • Members
  • 4,491 posts

Posted 22 June 2020 - 0128 AM

It feels like a system built on "good enough" and left to remain that way. I can understand the difficulty from the ATC perspective, but in the communications between ATC and pilots that I have just started listening to, the voices cracking with stress are usually in the cockpit, not the tower.

 

For some reason (oblivious frequent-coach-flyer ignorance, probably) I thought ATC's relationship with scared pilots trying to land was a symbiotic one.

 

A hundred years from now, people will look back on the way we do it now and wonder.



#324 Nobu

    Crew

  • Members
  • 4,491 posts

Posted 22 June 2020 - 0140 AM

So it could very easily go like this: the FO makes a dumb mistake and delays the point at which they start their descent, the Captain takes over and tries to save the day, and the FO sees they are on an unstabilised approach but hopes the Captain knows what he's doing, until it's too late...

 

It certainly sounds possible. It also is tragic if that was what happened. I wish they had made it.



#325 Brasidas

    Member

  • Members
  • 12,702 posts

Posted 22 June 2020 - 0348 AM

There was an FAA stop ship on Collins FMSs (which did not apply to Fusion), but Collins put together some limitations, along with some proposed software changes that disable temperature compensation (temp comp), to get the agency's knee off their neck.
 

https://www.rockwell...ifications.aspx

 

Essentially it's two workarounds: perform manual temp comp, and don't edit CA leg altitudes.
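For anyone wondering what "perform manual temp comp" actually involves, here is a rough sketch using the usual rule-of-thumb cold-temperature correction (roughly 4% of the height above the altimeter setting source for every 10 °C below ISA). This is my own illustration, not Collins' procedure; real corrections come from the published ICAO/AIM tables.

# Illustrative only: rule-of-thumb cold-temperature altitude correction,
# about 4% of height above the altimeter setting source per 10 degC below ISA.
# Real-world corrections come from the published ICAO/AIM tables.

def cold_temp_correction_ft(height_above_source_ft, oat_c, isa_ref_c=15.0):
    """Approximate correction to ADD to a charted altitude in cold weather."""
    below_isa = isa_ref_c - oat_c          # degrees colder than the ISA reference
    if below_isa <= 0:
        return 0.0                         # no correction needed at or above ISA
    return height_above_source_ft * 0.04 * (below_isa / 10.0)

# Example: a charted 2,000 ft above the field at -15 degC (30 degC below ISA)
# needs roughly 2000 * 0.04 * 3 = 240 ft added.
print(cold_temp_correction_ft(2000, -15))   # 240.0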



#326 DB

    Crew

  • Members
  • 11,699 posts

Posted 23 June 2020 - 0816 AM

"Can't you fix it in wetware?"

Unresolved critical software failure modes would, in my opinion, cast significant doubt on a company's ability to make a design assurance integrity argument, but hey, I don't have to get an FMS certified, so it's easy to criticise.

#327 Brasidas

    Member

  • Members
  • 12,702 posts

Posted 23 June 2020 - 0827 AM

"Can't you fix it in wetware?"

Unresolved critical software failure modes would, in my opinion, cast significant doubt on a company's ability to make a design assurance integrity argument, but hey, I don't have to get an FMS certified, so it's easy to criticise.

 

Well, it's kind of like trying to design a plane so that it won't ever fly into mountains. You can, but should you have to?
 

Regardless, the vendor here has had their second stop ship on a major nav function in less than a year. Rumor going around is the agencies are getting ready to do a huge collective software audit.



#328 Brasidas

    Member

  • Members
  • 12,702 posts

Posted 25 June 2020 - 1037 AM

Just got an email from a colleague of mine who is a software configuration specialist, sorry for the long post.
 

"Excerpt from Flight Safety Information Daily and book by Captain Shem Malmquist – interesting reading.  In these overlapping critical systems is DAL A enough? Will ARP compliance become the pre-requisite for the implementation of new avionics suites on older, in-service aircraft? 

 

A NEW APPROACH TO FLIGHT AUTOMATION

 

By Captain Shem Malmquist

Every new jet relies heavily on state-of-the-art computerized systems. Automation has been the "name of the game" for several decades now as designers layer on new software and hardware. Pilots are now accustomed to operating aircraft that contain flight automation that manages nearly every aspect of flight. This has led to well-known phrases such as "the children of the magenta line," referring to pilots who are focused on just following the automation, and more technical terms such as "automation dependency" and similar concepts. We all know what these terms mean, but are they accurate?

 

Are pilots handicapped by a lack of basic stick and rudder skills or is something else afoot? Certainly, nobody would argue that stick and rudder skills do not become weaker as automation ramps up, but is that really the problem that is leading "automation dependent" pilots to loss of control events?

 

As a training pilot and accident investigator I do not see pilots that are unable to fly the airplane. What I do see is pilots that keep messing with the automation in an attempt to "fix it" until it does what they want (hopefully). It could be that they realize they grabbed the wrong knob, for example, the airspeed instead of the heading, in response to an ATC clearance, or, for whatever reason, the autopilot was not intercepting an altitude that was set. In the process they get distracted, lose focus and end up in unexpected scenarios. The key here is really ensuring strict discipline. The pilot-flying must focus on flying the airplane. The pilot-monitoring must ensure the pilot-flying is doing what they are supposed to do. The pilot-monitoring is, literally, the "control" (to use system theory parlance) for the pilot-flying. My suggestion would be to teach both pilots to focus only on the aircraft path during any dynamic situation. Any changes to configuration, turns, initial climb and descents, altitude capture, etc., should involve both pilots focusing on flight instruments. Further, if something is not going as expected, immediately degrade the automation to a point where both pilots know with certainty what it will be doing next. That may involve turning off all automation.

 

That brings us to the second problem no one is talking about. As we've learned from the two Boeing 737 Max accidents, pilots must have a fundamental idea of what computers are really doing, and what they are not doing. We tend to think of computers integrated into our aircraft as just another hardware component: either it does the job, or it has failed. Unlike an analog braking system or an altimeter, computers are fundamentally different.

 

Attempts to personalize computers with terms such as "machine intelligence" miss the point. Computers are not living, but they do interact with the world around them as they were designed to do. They are hooked up to sensors to "sense" the factors that the people who designed them deemed important, and they react the way the designers enabled and designed them to. The computer might, therefore, be missing vital information critical to an alarming scenario simply because it was beyond the designer's imagination.

 

Assuming that the data is all being collected as designed, it flows into the computer, which uses a "process model" to decide what actions to take. The programmer has attached the computer output to various systems the computer is, literally, controlling. Unlike a living organism, a computer is totally unable to deviate from its programming. It cannot come up with a new or novel solution; it simply follows its instructions: "I know xyz, and based on the values of xyz I perform abc," and that is all. There is no nuance here. Depending on the challenge at hand this can be useful or it can create new problems.
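[My aside, not Malmquist's: in code terms, that "process model" is nothing more than a fixed mapping from sensed values to actions. A deliberately toy sketch of the idea follows - the names and numbers are invented, not from any real system.]

# Toy "process model": fixed rules applied to whatever the sensors report.
# Invented purely for illustration - no real system is two if-statements.

def process_model(sensed_airspeed_kt, sensed_aoa_deg):
    if sensed_aoa_deg > 12.0:
        return "command nose-down trim"
    if sensed_airspeed_kt < 120.0:
        return "command more thrust"
    return "hold current commands"

# Feed it a failed vane stuck at 45 degrees and it will keep commanding
# nose-down trim; it has no way to step back and question its own inputs.
print(process_model(sensed_airspeed_kt=250.0, sensed_aoa_deg=45.0))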

 

One way problems can begin is when the data is accurate but the computer process model is flawed. This can be analogous to a person being trained to do the wrong thing. An example might be a person who is not following a checklist because they had an instructor insist that they followed that instructor's "technique" instead.

 

Problems can also begin when the data coming in is flawed. Here the computer does exactly what it was programmed to do, no more, no less. If the designer anticipated the exact data problem, the computer should do something rational. For instance, it might stop all further actions and alert the pilot that it cannot do its assigned job. However, if the designer did not anticipate a problem, then there is no way for the pilot, in real time, to be certain of what the computer might now do. Yes, any person with software knowledge could look at the code and the data and tell you what it might do in that scenario. That doesn't help when a new scenario is suddenly discovered mid-flight.

 

Now recall the data inputs on an aircraft system, say a flight control computer. It needs airspeed, angle of attack, flight control positions and flap positions, and it might need CG, g-loading (Nz), Mach number and more. It takes that information in and then, based on what commands are given to it by the pilot (inputs), runs it through a process model and out to the control surfaces, engines and other items it might be managing. These can include elevators, ailerons, spoilers, rudders, flaps, slats, trim, etc. The output is dependent on the input coupled with the programming, which becomes the process model. OK, hold that thought for a moment.

 

What is your procedure if something happens on takeoff that is not in the books? No QRH, or at least no immediate action items. Let's say a sensor failure: a loss of angle of attack, a loss of the g-force reading, or even the inability of the computer to read a flight control position. You have some sort of fault indication (maybe) right after V1; what would you do? Most training programs would have you continue the takeoff, positive rate, gear up, get to a safe altitude, clean it up, then troubleshoot (maybe), or perhaps just continue to the destination.

 

All nice, except for one little problem. Remember that computer process model? The computer's process model is now flawed due to bad data. The computer is unable to "know" the correct actions. Changing any aspect might result in an unexpected outcome as the computer mixes the new details of your changes with the bad data. All of that goes into the software for a new "decision" on what output to perform. This is, in a nutshell, what occurred in the Max accidents. Pilots retracted the flaps and BANG, MCAS was activated. A change was coupled with a bad input in a scenario not anticipated in the design. The rest is now aviation history.
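[Again my aside, not from the article: the publicly reported MCAS arming conditions show "a change coupled with a bad input" quite literally. Sketched roughly below, simplified from the accident reporting, and with a placeholder threshold rather than Boeing's actual values.]

# Rough sketch of the reported MCAS trigger conditions, simplified, with a
# placeholder angle-of-attack threshold - not Boeing's actual implementation.

def mcas_commands_trim(flaps_retracted, autopilot_engaged, aoa_deg, threshold_deg=10.0):
    # Reported conditions: flaps up, autopilot off, AoA above a threshold,
    # with the AoA value taken from a single sensor.
    return flaps_retracted and not autopilot_engaged and aoa_deg > threshold_deg

# With a failed vane reading 20+ degrees, nothing happens while the flaps are
# out; the moment the crew retracts them the condition becomes true and
# nose-down trim starts - "pilots retracted the flaps and BANG".
print(mcas_commands_trim(False, False, 22.0))   # False: flaps still extended
print(mcas_commands_trim(True, False, 22.0))    # True: trim activates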

 

MCAS is not the only "gremlin" out there like this. There are others lurking. For example, something as seemingly innocuous as the pilot giving the computer commands in a way that was not anticipated by the designer can yield unexpected outcomes. So what can a pilot do? One suggestion is to consider the way computers work. If the airplane is flying ok, maybe it is worth considering not changing anything. No change to the configuration, no change to anything that is within the pilot control. Just keep it flying, and when any changes are made, be prepared for an unexpected outcome.

 

In a traditional legacy airplane this was no problem. Changing the flaps or the landing gear would be unlikely to entirely change the way the airplane handled. Changing the flaps would not suddenly trigger secondary systems in unexpected ways. That is no longer true. There is simply no way to train pilots to understand all the possible ways that every system might react to every circumstance. Arguably (and based on what we have seen) even the designers might not have considered all these possibilities. So I would argue that our procedures are not keeping up with the changes to our aircraft architecture. Until they get caught up, pilots need to have a much better understanding of how computers work and how they interact with the world around them."

 

The ARP reference is to ARP-4754A, which is a guiding process that validates all technical requirement definitions of the aircraft components through testing, design analysis and/or demonstration. The requirement validation exercise then becomes the basis for stating that the DO-178 (software)/DO-254 (hardware) DALs (Design Assurance Levels) are valid and consistent with the System Safety Analysis, which now has hard data to back it up (rather than purely statistical data). It makes the agencies happy too, and certification gets done far more quickly.
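For anyone not steeped in the acronyms: the DAL assigned to a function follows from the worst failure condition it can contribute to, roughly per the usual AC 25.1309/ARP4761 figures. The summary below is from memory and purely for orientation; check the current guidance for the real wording.

# Rough, from-memory mapping of design assurance level to failure condition
# severity and the customary per-flight-hour probability objective.
# Orientation only - not a substitute for the actual guidance material.

DAL_LEVELS = {
    "A": ("Catastrophic",     1e-9),
    "B": ("Hazardous",        1e-7),
    "C": ("Major",            1e-5),
    "D": ("Minor",            None),   # no commonly quoted quantitative objective
    "E": ("No safety effect", None),
}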



#329 Stuart Galbraith

    Just Another Salisbury Tourist

  • Members
  • 58,876 posts

Posted 25 June 2020 - 1054 AM

I remember reading that British Airways pilots (and presumably other airlines' too) choose at the start of their careers whether they want to be an Airbus pilot or a Boeing pilot, just so there is some kind of familiarity with the different ways the two companies set up their automation. Thinking back to the early days of Airbus, there were continual problems with landing modes and rate-of-descent selection, simply because it was completely different from how the Americans, or anyone, had done it before.

 

To me it would make perfect sense to agree a set of principles that all manufacturers would try to stick to, or at least go out of their way to flag if they do it differently. For example, there should be no reason why a different mode is selected by selecting flaps without screaming it out to the pilot. I question whether it should really do it at all without the pilot explicitly selecting that mode. Even if it makes the operation more difficult, at least it means a step change he has to become familiar with before it operates.

 

There was an excellent book I read in the school library called 'The Human Factor in Aircraft Accidents'. There was a Boeing Stratocruiser crash where the pilots, faced with a dozen similar controls, selected the wrong one, the cowl flaps didn't close, and the aircraft crashed into the sea (or something like that). It took a decision by the pilots to modify some of the controls by putting gaskets or socks on them to show they were different, so they would always pick the right one. They shouldn't have had to; the designers should have recognised humans aren't machines. Not necessarily automate the action (I'm not sure it was even possible then), just envisage that humans can make mistakes, and design the system to give the pilot as many cues as possible. That was possible to understand in the 1940s. Here we are, over 100 years on from the birth of flight, and designers are STILL making similar mistakes.



#330 Brasidas

    Member

  • Members
  • 12,702 posts

Posted 25 June 2020 - 1152 AM

I remember reading that British Airways pilots (and presumably other airlines' too) choose at the start of their careers whether they want to be an Airbus pilot or a Boeing pilot, just so there is some kind of familiarity with the different ways the two companies set up their automation. Thinking back to the early days of Airbus, there were continual problems with landing modes and rate-of-descent selection, simply because it was completely different from how the Americans, or anyone, had done it before.

 

To me it would make perfect sense to agree a set of principles that all manufacturers would try to stick to, or at least go out of their way to flag if they do it differently. For example, there should be no reason why a different mode is selected by selecting flaps without screaming it out to the pilot. I question whether it should really do it at all without the pilot explicitly selecting that mode. Even if it makes the operation more difficult, at least it means a step change he has to become familiar with before it operates.

 

There was an excellent book I read in the school library called 'The Human Factor in Aircraft Accidents'. There was a Boeing Stratocruiser crash where the pilots, faced with a dozen similar controls, selected the wrong one, the cowl flaps didn't close, and the aircraft crashed into the sea (or something like that). It took a decision by the pilots to modify some of the controls by putting gaskets or socks on them to show they were different, so they would always pick the right one. They shouldn't have had to; the designers should have recognised humans aren't machines. Not necessarily automate the action (I'm not sure it was even possible then), just envisage that humans can make mistakes, and design the system to give the pilot as many cues as possible. That was possible to understand in the 1940s. Here we are, over 100 years on from the birth of flight, and designers are STILL making similar mistakes.

Well, yeah, that's the thing. The design philosophies being implemented are based on the newest data, while previous designs are using older data.

 

The primary design philosophy that Boeing adheres to is pilot-in-the-loop. The pilot is continuously tasked with staying in the flight process, and they have to be involved and capable of making the correct decisions for the aircraft to operate safely.

 

The primary design philosophy that Airbus adheres to is automation-in-the-loop. The automation, when designed correctly, is almost infallible. The only time the pilot will be actively involved is when the aircraft configuration needs to change, or the flight plan changes due to unforeseen circumstances. This offloads the crew a lot. The crew still have things to do, but they are minor and are more like babysitting the automation and cross-checking the data being displayed.

 

So, what's the benefit?

 

When pilot quality is high, and the design is good, the Boeing system works well. (MAX is a very bad design, hence the possibility of malfeasance in claiming safe design). When the design is bad and pilot quality is adequate, that may not be enough for safe operation as the MAX incidents show.

 

When pilot quality is adequate and the design is good, the Airbus system works well. However, when you have system upsets outside of the System Safety Criteria, such as the triple ADS failure on Air France Flight 447, you have a crew trying to cope with an unforeseen systemic failure (not trained for) that would be almost impossible to overcome. It's hard to deal with a sudden loss of all speed and attitude data at night, with fault indications going off continuously, no matter the circumstances. If the Air Data System was rated DAL B, then for a triple ADS installation the probability would be assessed at around 1 in 10^18 flight hours, which does not have to be designed for even with catastrophic impact, so no mitigation would be necessary. That was a flaw in the ADS design and the SSA assumption. The inertial systems could have kept displaying attitude data, since they know their orientation, and they also have an inertial altitude/airspeed output, so that data was available; but the ASI (aircraft situation indicator) was designed to show loss of aircraft attitude data in those circumstances.
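The 10^18 figure is just the independence assumption at work: treat each air data channel as failing independently and the numbers multiply. A quick sketch of the arithmetic (the per-channel rate below is an assumed figure for illustration, not a certified number for any real system):

# Worked arithmetic behind a "1 in 10^18 flight hours" style of claim.
# The per-channel rate is assumed purely for illustration.

per_channel_loss = 1e-6               # assumed probability of losing one ADS channel per flight hour
independent_triple_loss = per_channel_loss ** 3
print(independent_triple_loss)        # 1e-18

# The catch: icing that blocks all the pitot probes at once is a single
# common-cause event, so the real likelihood is governed by that one event,
# not by the product of three "independent" failures.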

 

Really, in my opinion, a good design would have the capability to cope with multiple failures, and in the worst case, give aircrew any valid data available so they can use their own judgement and guide the aircraft as best they can to a resolution with survivors on the ground. There's no reason either design philosophy should lead to a worst case outcome.



#331 DB

    Crew

  • Members
  • 11,699 posts

Posted 25 June 2020 - 1915 PM

The preliminary report for the PIA crash is out.

It looks like the aircraft was perfectly serviceable, and ATC did, at least initially, the right things. The crew most definitely did not.

They never got close to a stabilised approach: they crossed a waypoint 15 miles out at nearly 10,000 ft when they should have been at 3,000; they never got near approach speeds; they extended the landing gear at 240 kt; they selected flap settings that were not allowed at their airspeed; then they retracted the landing gear, touched down, applied reverse thrust, and went around.
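To put those numbers in perspective: a standard 3-degree final works out to roughly 300 ft of height loss per nautical mile, so a simple gradient check shows how far outside a normal profile they were (rule-of-thumb arithmetic only, ignoring field elevation and wind):

# Rule-of-thumb check of the reported profile. A standard ~3 degree path is
# roughly 300 ft per NM; field elevation and wind ignored for simplicity.

distance_nm = 15
altitude_over_waypoint_ft = 10_000    # reported altitude 15 miles out
expected_on_profile_ft = distance_nm * 300                              # ~4,500 ft on a normal path
required_gradient_ft_per_nm = altitude_over_waypoint_ft / distance_nm   # ~667 ft/NM

print(expected_on_profile_ft, round(required_gradient_ft_per_nm))
# They would have needed to average more than double the normal descent
# gradient from that point just to reach the runway, never mind slow down.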

The FDR quit when they lost all power. It's also possible the CVR stopped at the same time, but it definitely recorded all of the audible warnings they received, and their ignoring of two ATC requests to orbit to lose height and speed. There is no transcript yet.

The gear was deployed normally at the time of impact, so there is strong evidence that it was working normally.

The pilot who was reading through the report was having a hard time comprehending the level of upfuckedness involved.