At the dinner table, both Marty Jr. and Marlene have VR goggles. Marty Jr. wears his continuously, but Marlene is more polite and rests hers around her neck when with the family. When she receives a call, red LEDs flash the word PHONE on the outside of the goggles as they ring. This would be a useful signal if the volume were turned down or the ring were drowned out by ambient sounds.
Marty Jr.’s goggles are on, and he announces to Marty Sr. that the phone is for him and that it’s Needles.
This implies a complete wireless caller ID system (which had only just been released to market in the United States the year before the movie was released) and a single number for the household that is distributed amongst multiple communications devices simultaneously, which was not available at the time (or hey, even now), so it’s quite forward looking. Additionally, it lets the whole social circle help manage communication requests, even if it sacrifices a bit of privacy.
To get Jennifer into her home, the police take her to the front door. They place her thumb on a small circular reader by the door. Radial LEDs circle underneath her thumb for a moment as it reads. Then a red light above the reader turns off and a green light turns on. The door unlocks and a synthesized voice says, “Welcome home, Jennifer!”
As with the Thumbdentity, multifactor authentication would be much more secure. The McFly family is struggling, so you might expect them to have substandard technology, but the fact that the police use something similar casts that in doubt.
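To make the point concrete, here’s a minimal sketch of what that multifactor check might look like. Everything here is hypothetical—the film shows only the thumbprint—but it combines the print (“something you are”) with a second factor like a PIN (“something you know”):

```python
# Hypothetical sketch: a door that requires BOTH a biometric match and a
# known secret before unlocking. Neither factor alone is sufficient.

def unlock_door(thumb_match: bool, pin_entered: str, pin_on_file: str) -> bool:
    """Both factors must pass; a stolen thumbprint alone won't open the door."""
    return thumb_match and pin_entered == pin_on_file

assert unlock_door(True, "1985", "1985") is True
assert unlock_door(True, "2015", "1985") is False   # print alone isn't enough
assert unlock_door(False, "1985", "1985") is False  # PIN alone isn't enough
```

The design point is that compromising one factor (a lifted print, an overheard PIN) no longer compromises the house.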
The security alert occurs in two parts. The first is a paddock alert that starts on a single terminal but gets copied to the big shared screen. The second is a security monitor for the visitor center in which the control room sits. Both of these live as part of the larger Jurassic Park.exe, alongside the Explorer Status panel, and take the place of the tour map on the screen automatically.
Paddock Monitor
After Nedry disables security, the central system fires an alert as each of the perimeter fence systems goes down. Each section of the fence blinks red, with a large “UNARMED” label on top of the section. After blinking, the fence line disappears. To the right is the screen for monitoring vehicles.
As soon as the system starts detecting the disabled fences, it projects the fence security diagram onto the main screen at the front of the Control Room for everyone to see, with a status bar on the right reading “SECURITY, PADDOCKS, TRACKING, and VIDEO.”
Visitor Center
The system has a second screen showing security measures in the visitor center itself. It focuses on the security doors between public and private areas (dining, halls, the genetics lab, and the cryostorage).
In both cases, these security screens appear on the same computer showing the vehicle status. It replaces the island map. This isn’t a separate program, but is instead a replacement window, as shown by the identical data in the columns to the left and right of the map view.
Don’t Break Existing Mental Models
Throughout the security panels, there isn’t any consistency in color labeling. On the fences, red is good. On the visitor center map, red is bad. On the glitches panel, red means that it should be looked at, but might not be bad.
First, accessibility standards say that color shouldn’t be the only indicator of status. Thankfully for this interface and its inconsistency, it at least has labeling. But that means that an operator needs to either memorize the entire panel before they can be proficient at it, or read each label every time.
Second, color standardization could be helped by more careful background colors. Picking more neutral backgrounds—for example, the island on the fence map doesn’t need to be bright green, and could either be desaturated or a basic light grey—would allow the status colors to show up better and make the text more readable.
Third, while the status indicators are labeled, the labels are written in system language instead of user language. “Clear” and “Check” can be understood with some work, but aren’t natural status labels in day-to-day language.
Keep Indicators
When the fences deactivate, they disappear off the screen. While this does show that they’re disabled, it removes the control room crew’s ability to quickly see what they can fix and where it is. Unless a room full of experts is looking at the screen, they won’t know where the T-Rex fence is and where to send work crews. Keeping the fences on the screen in a ‘disabled’ or ‘broken’ state would indicate the same information, while still providing direction.
The visitor center screen almost gets this right by changing the color, changing the label, and showing the door as open on the panel. Practically, this is what’s happening (the ‘raptors can get through any door they want), even though some of those doors are physically still closed.
In the case of the doors, it would make more sense to have the status change, but only have the door open if the system actually detects a door opening in the building.
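That logic is simple enough to sketch. The following is a hypothetical illustration—none of these names come from the film—of keeping a door’s security state and its physical state separate, so the map never shows a closed door as open:

```python
# Hypothetical sketch: separate the *security* state of a door from its
# *physical* state, so the map can show "unlocked" without falsely
# showing the door as open. Names and states are assumptions.

from dataclasses import dataclass

@dataclass
class Door:
    name: str
    locked: bool        # security system state
    open_sensor: bool   # physical contact sensor on the door

def door_icon(door: Door) -> str:
    """Build a map label that keeps both facts visible."""
    security = "LOCKED" if door.locked else "UNLOCKED"
    physical = "OPEN" if door.open_sensor else "CLOSED"
    return f"{door.name}: {security}/{physical}"

# A door whose lock has failed but which is still physically shut:
print(door_icon(Door("CRYO-STORAGE", locked=False, open_sensor=False)))
# -> CRYO-STORAGE: UNLOCKED/CLOSED
```

With this split, the control room can still see which doors are compromised without losing track of which ones are actually standing open.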
Show the ’Raptors
This is the most critical screen in the park during an emergency, but it isn’t showing a critical status: The Velociraptor Pen.
Arnold has to roll over to the command console computer and type in a manual status request to learn that the raptor pen fences are still active. Given how important this status is to everyone in the park, it should be on the main map in some form or another.
Additionally, the system could show the status of secondary systems in a side pane:
Tour status
Camera feeds
Dinosaur locations
Secondary equipment status
When things start to go wrong on the island, this interface should provide guidance to the control room crew on what they need to do and what they need to fix (even if, in this case, the answer is “Everything”).
Organizing information, mapping status across screens, and providing lists of what needs to be fixed would give an understandable checklist to the park staff on what they should be doing.
The velociraptor pen is a concrete pit, topped with high-powered electric fences. There are two ways into the pen: a hole at the top of the pen for feeding, and a large armored door at ground level for moving ‘raptors in and out. This armored door has the first interface seen in the film, the velociraptor lock.
Velociraptors are brought from breeding grounds within the park to a secure facility in a large, heavily armored crate. Large, colored-light indicators beside the door indicate whether the armored cages are properly aligned with the door. The light itself goes from red when the cage is being moved, to yellow when the cage is properly aligned and getting close to the door, to green when the cage is properly aligned and snug against the concrete walls of the velociraptor pen. There is also a loud ‘clang’ as the light turns to green. It isn’t clear if this is an audio indicator from the pen itself, the cage hitting the concrete wall, or locks slamming into place; but if that audio cue wasn’t there, you’d want something like it since the price for getting that wrong is quite high.
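For illustration, the light logic described above reduces to a small state function. This is a sketch only—the alignment flag, gap measurement, and the audio cue trigger are all assumptions:

```python
# Sketch of the dock-light logic: red while the cage is misaligned or
# moving, yellow once aligned and approaching, green (plus the audio cue)
# only when aligned and flush against the pen wall.

def dock_light(aligned: bool, gap_m: float) -> tuple[str, bool]:
    """Return (light color, whether to sound the 'locked' cue)."""
    if not aligned:
        return ("red", False)
    if gap_m > 0.0:          # aligned but not yet flush
        return ("yellow", False)
    return ("green", True)   # flush against the concrete wall

assert dock_light(False, 3.0) == ("red", False)
assert dock_light(True, 0.5) == ("yellow", False)
assert dock_light(True, 0.0) == ("green", True)
```

A safer variant would also require a hard-lock sensor to report engaged before showing green—since, as the scene goes on to show, being flush against the wall is not the same as being secured.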
The complete interface consists of four parts (kind of, read on): The lights, the door, the lock, and the safety. More on each below.
1. The Deceiving Lights
The lights are the most obvious part of the system (aside from the cage and pen). Everyone who is watching the cage also has a clear view of the lights – there is an identical set on the other side of the cage for the other half of the safety/moving crew.
2. The Door
The Velociraptor pen’s door is perfectly shaped to accept the heavily armored cage, and is equipped with a rail system to keep the cage aligned properly with the door. Though it takes eight workers to move the cage, they appear to be able to push the cage reasonably easily. When the light turns green, the workers move back to allow the gate to be manually raised on the cage, letting the caged velociraptor escape into the pen.
3. The “Lock”
Or, lack thereof…
Every indication (the lineup of the cage, the green lights, and the heavy metallic ‘clang’) gives the feeling of a secure mating between the cage and the pen. All of the workers relax, as if they’re sure they’re as safe as they can be. But you can be certain, this is a false sense of security.
As soon as the velociraptor decides to test the lock, it is able to push the cage away from the pen wall. The light near the door instantly changes from green back to red.
Narratively, this underscores some of the risks of the park, i.e., that it’s cheaply engineered despite appearances, and extra-diegetically it sets the audience on edge, no longer sure what it can trust. But, for us in the real world, given the many indications that the system was safe, it should have actually been safe.
4. The Safety
When the clever velociraptor knocks the cage back, a worker falls in and becomes an unscheduled snack. Attendant workers try to help using…
The Cattle Prods
When the gate master falls and gets snatched by the velociraptor in the cage, workers immediately rush in and start hitting her with cattle prods. There are at least six prods being used, possibly more.
Since this is the first line of backup defense, the cattle prods should have been iterated until they actually deterred the ‘raptors. Clearly, effort went into making the perimeter defenses secure against the larger dinosaurs. The same effort should have gone into making the cattle prods effective against velociraptors.
Design for Success
The Velociraptor pen door seems custom-designed for serious failure: No hard locks to keep the cage in place, horrible sight-lines, and manual controls in places that make it dangerous for workers. Even the solid feedback system only adds to the danger. It lulls the workers into thinking the system is safe.
Most, if not all, of these issues would be solved by a simple physical locking device on the cage. Something to hold the cage in place while the doors are open would maintain a secure pen and keep everyone outside safe. It would also eliminate the need for most of the support crew, who only end up getting in each other’s way.
To add to the safety, the park designers should have paid more attention to where people would be standing during the transfer process. The armed guards (theoretically there as a second line of defense) are placed in such a way that only a few of them can fire effectively. Other guards on scene would have to fire past their fellow guards.
Presumably, this is why the armed guards don’t actually fire at the ‘raptor when Muldoon shouts to “Shoot her! Shooooooot her!!”
Keep the feedback…
The feedback systems of the cage are remarkably successful, for a placebo. The lights, sounds, and placement keep the workers and audience calm right up until things go horribly wrong. Combined with Muldoon’s organizational and animal-handling skills, the feedback system is worth taking notes on.
…but make it mean something
The velociraptor pen was designed to tell the workers what state it was in, but not to actually keep them safe. Muldoon’s precautions try to make up for the system’s failures, but only add to the problems as the workers trip over each other.
Jack lands in a ruined stadium to do some repairs on a fallen drone. After he’s done, the drone takes a while to reboot, so while he waits, Jack’s mind drifts to the stadium and the memories he has of it.
Present information as it might be shared
Vika is in comms with Jack when she notices the alert signal from the desktop interface. Her screen displays an all-caps red overlay reading ALERT, and a diamond overlaying the unidentified object careening toward him. She yells, “Contact! Left contact!” at Jack.
As Jack hears Vika’s warning, he turns to look, drawing his pistol reflexively as he crouches. While the weapon is loading, he notices that the cause of the warning was just a small, not-so-hostile dog.
Although Vika yells about something coming from the left side, by looking at the screen you can tell that it’s more to his back—his 6 or 7 o’clock—than his left. We’re seeing it with time to spare here, and the satellite image is very low-res, so we can cut her some slack. But given all the sensors at its command, the interface would ideally know which way Jack is facing and which way the threat is approaching, so she can convey correct and useful information quickly.
“Contact, at your 6, Jack!”
That’s much more precise and actionable for Jack.
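Computing that callout is trivial once the system knows Jack’s heading and the contact’s position. Here’s a rough sketch, with an assumed coordinate convention (x east, y north, heading in degrees clockwise from north)—the film never specifies any of this:

```python
import math

# Sketch: convert a contact's offset from Jack plus Jack's heading into
# a clock-position callout ("at your 6"). Conventions are assumptions.

def clock_position(heading_deg: float, dx: float, dy: float) -> int:
    """Bearing of a contact at offset (dx east, dy north), as 1-12 o'clock."""
    bearing = math.degrees(math.atan2(dx, dy)) % 360   # absolute bearing
    relative = (bearing - heading_deg) % 360           # relative to facing
    hour = round(relative / 30) % 12                   # 30 degrees per hour
    return 12 if hour == 0 else hour

# Jack facing north, contact directly behind him (to the south):
print(clock_position(0, 0, -10))   # -> 6
```

With this, the interface could render the callout for Vika, or even speak it directly into Jack’s comms.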
Don’t cover information
It might be useful to put the ALERT overlay somewhere other than on top of Jack, since it might obscure some useful information. Perhaps the “chrome” of the interface could turn red? Not as instantly readable for the audience, but if we’re designing for Vika…
Provide specifics
Another issue is that neither the satellite image nor the interface help Vika to identify what ends up being just a dog. Even when Jack manages to stay cool through the little scare jump, adding at least some information about the object would go a long way to make Vika and the situation less tense.
Jack’s encounter with the TET gives clear evidence that the TET has sophisticated computer vision, so the interface could help Vika a bit by “guessing” what any questionable object might be. It doesn’t need to be exact (and it probably couldn’t be with that kind of video feed), but the computer could give an educated guess by analyzing the context, shape, and motion against things in its database. So instead of reporting an 87% chance of a dog or a 76% chance of a fox, the interface could just predict unknown animal (see below).
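That “educated guess” behavior might look something like the following sketch, where the taxonomy, scores, and confidence threshold are all invented for illustration:

```python
# Sketch: collapse low-confidence classifier guesses into a broader,
# calmer label instead of surfacing shaky percentages. The taxonomy
# and threshold are assumptions.

CATEGORY = {"dog": "animal", "fox": "animal", "scav": "hostile"}

def describe(scores: dict[str, float], threshold: float = 0.9) -> str:
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return label                               # confident: be specific
    return f"unknown {CATEGORY.get(label, 'contact')}"   # hedge upward

# The film's own numbers: 87% dog, 76% fox -> report the broader category.
print(describe({"dog": 0.87, "fox": 0.76}))   # -> unknown animal
```

“Unknown animal” is both honest about the uncertainty and enough to take the edge off the situation.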
Share off-screen information
Fast viewers saw the unknown object before the warning. For a split second as the object enters the screen, it remains blue. So the computer does keep track of any movement, even if it’s not a threat. In that case, the issue is that the computer seems to be tracking movement far beyond the visible area of the screen, but it doesn’t let Vika know something’s coming from off-screen. The display doesn’t need to zoom out to reach the contact—that could distract Vika from following Jack—but it could at least show some kind of signal pointing at the incoming contact.
What of multiple contacts?
I’m cautious about talking through what-ifs, since most of it is just guesswork—but bear with me. In this sequence the interface keeps track of just one contact, but how would it behave if there were more than one? If the computer does track contacts beyond the camera display Vika is watching, then just marking them is not enough. If Vika needs to inform Jack of the number of contacts she’s getting on the screen, then she needs some sort of overview. Pointing at the direction of each contact is useful, but it means she has to sweep the whole screen to know how many there are. That could be easily fixed by adding a list of all the current contacts.
Show trending
Pausing the film and looking closely, it seems that the only difference between all-is-fine and contact! with the dog is about a meter. What is more, by the time the interface triggers the warning, the dog is really close to Jack. If that were a feral dog about to attack him, the warning would come far too late.
In such mission-critical monitoring, it’s not enough to show changes of state. Change the state subtly to indicate how things are trending—as in, this dog is likely to continue its intercept course and is getting closer.
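Trend-aware alerting of that kind could be sketched as follows: estimate closing speed from recent range samples and escalate based on time-to-contact rather than raw distance. All thresholds, units, and names here are illustrative assumptions:

```python
# Sketch: warn based on how fast a contact is closing, not just how
# close it is. A slow-moving nearby dog rates lower than a fast one
# still far away. Thresholds are invented for illustration.

def alert_level(ranges_m: list[float], interval_s: float = 1.0) -> str:
    """ranges_m: recent distances to the contact, oldest first."""
    if len(ranges_m) < 2:
        return "watch"
    elapsed = (len(ranges_m) - 1) * interval_s
    closing_speed = (ranges_m[0] - ranges_m[-1]) / elapsed  # m/s toward Jack
    if closing_speed <= 0:
        return "watch"                       # holding or opening range
    time_to_contact = ranges_m[-1] / closing_speed
    if time_to_contact < 5:
        return "alert"
    if time_to_contact < 15:
        return "caution"
    return "watch"

print(alert_level([40, 30, 20]))   # closing 10 m/s, 2 s out -> alert
```

The intermediate “caution” state is the subtle trending indicator: it buys Vika and Jack seconds before the situation becomes an alert.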
We got this
So to wrap up, the interface does a good enough job, but it could certainly benefit from some design changes. The issues are ones that any designer might face when working on a monitoring interface, so they’re worth summarizing.
Share all the information that is at hand
Give the user the information in the form they might pass it along
Assign an easy-to-distinguish hierarchy: information, suspicion, warning
Provide best-guesses as to the nature of problems with as much specificity as you can
Provide unobtrusive but clear signals about the mode
As Vika is looking at the radar and verifying visuals on the dispatched drones with Jack, the symbols for drones 166 and 172 begin flashing red. An alert begins sounding, indicating that the two drones are down.
Vika wants to send Jack to drone 166 first. To do this, she sends Jack the drone’s coordinates by pressing and holding the drone symbol for 166, at which point the coordinate data is displayed. She then drags the coordinates with one finger to the Bubbleship symbol and releases. The coordinates immediately display on Jack’s HUD as a target area showing the direction he needs to go.
Simple interactions
Overall, the sequence of interactions for this type of situation is pretty simple and well thought out. Sending coordinates is as simple as:
Tap and hold on the symbol of the target (in this case the drone) using one finger
A summary of coordinates data is displayed around the touchpoint (drone symbol)
Drag data over to the symbol of the receiver (in this case the Bubbleship)
Then on Jack’s side, the position of the coordinates target on his HUD adjusts as he flies toward the drone. Can’t really get much simpler than that.
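The whole interaction could be modeled in a few lines. This sketch invents all of its names—the film never specifies the underlying system—but captures the long-press-then-drop flow described above:

```python
# Hypothetical sketch of the TETVision send-coordinates interaction:
# long-press a source symbol to surface its coordinates, then drop
# them on a receiver symbol to transmit. All names are assumptions.

class Console:
    def __init__(self):
        self.sent: list[tuple[str, tuple[float, float]]] = []
        # Invented map positions for the symbols on screen:
        self.objects = {"drone_166": (41.2, -73.7), "drone_172": (41.5, -73.9)}

    def long_press(self, symbol: str) -> tuple[float, float]:
        """Step 1-2: tap-and-hold displays the coordinate summary."""
        return self.objects[symbol]

    def drop(self, coords: tuple[float, float], receiver: str) -> None:
        """Step 3: releasing over a receiver transmits to its HUD."""
        self.sent.append((receiver, coords))

console = Console()
coords = console.long_press("drone_166")
console.drop(coords, "bubbleship")
print(console.sent)   # -> [('bubbleship', (41.2, -73.7))]
```

Three gestures, one transmission—the simplicity of the model is exactly why the on-screen interaction reads so cleanly.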
However…
When Vika initially powers up the desktop, the drone status feed already shows drones 166 and 172 down. This is fine, except the alert sound and blinking icons on the TETVision don’t occur until Jack has already reached the hydro-rigs. This is quite a significant lag between the drone status feed and the TETVision feed. A slight delay in the alert sound upon startup would be understandable—an immediate alert sound would likely suggest something wrong with the TETVision system itself—but the TETVision drone icons should at the very least already be blinking red on load.
Monitoring drone 166
As Jack is repairing drone 166, Vika watches the drone status feed on her desktop. The drone status feed is a dedicated screen to the right of the TETVision feed.
It is divided into two main sections, the drone diagnostic matrix to the left and the drone deployment table to the right.
The drone deployment table lists all drones currently working the security perimeter, giving an overview of information including drone ID, a diagram, and operational status. The drone diagnostic matrix shows data such as fuel status and drone positioning along the perimeter, as well as a larger, detailed diagram of the selected drone.
By looking at the live diagnostics diagram, Vika is able to immediately tell Jack that the central core is off alignment. As soon as Jack finishes repairing the central core, the diagram updates that the core is back in alignment and an alert sound pings.
How does the feed know which drone to focus on?
Since there is no direct interaction with this monitor shown in the film, it is assumed to be an informational display. So, how does the feed know which drone to focus on for diagnostics?
One possibility could be that Jack transmits data from the ground through his mobile drone programmer handset, which is covered in another post. However, a great opportunity for an example of agentive tech would be that when Vika sends the drone coordinates to the Bubbleship, the drone status feed automatically focuses on that one for diagnostics.
Clear messaging in real-time…almost
Overall, the messaging for the drone status feed is clear and simple. As seen in the drone deployment table, the dataset for operational drones includes the drone ID number and a rotating view of the drone schematic. If the drone is down, the ID number fades and the drone schematic is replaced with a flashing red message stating that the drone is offline. And when the drone is repaired, the display immediately updates to show that everything is operational again.
This is one of the basic fundamentals of good user interface design. Don’t let the UI get in the way and distract the user.
When students want to know the results of their tests, they check them via a public interface. A large, tiled screen is mounted to a recessed section of wall in a courtyard. The display is divided into a grid of five columns and three rows. Each cell contains one student’s results for one test, as a percentage. One cell displays an ad for military service. Another provides a reminder for the upcoming sports game. Four keyboards are situated below the screens at waist level.
To find her score, Carmen approaches one of the keyboards and enters some identifying data. In response, the column above it displays her score and moves the data in the other cells up. There is no way to learn one’s test scores privately. This hits Johnny particularly hard when he checks his scores and finds he has earned 35% on his Math Final, a failing grade.
Worse, his friend Carl is able to walk up to the keyboard and with a few key presses, interrupt every other student looking at the grades, and fill the entire screen with Johnny’s score for all to see, with the failing number blinking red and white, ridiculing him before his peers. After a reprimand from Johnny, Carl returns the display to normal with the press of a button.
Is ANSI the right input?
The keyboard would be a pain to keep clean, and you’d figure that a student ID would be a unique-and-memorable enough token. Does an entire ANSI keyboard need to be there? Wouldn’t a number pad be enough? But why a manual input at all? Nowadays you’d expect some near-field communication, or biometric token, which would obviate the keyboard entirely.
Is publicizing grades OK?
So there are input and interaction improvements to be made, for sure. But there are more important issues to talk about here. Yes, students can accomplish one task with the interface well enough: checking grades. But what about the giant, public output?
It’s fulfilling one of the dystopian goals of the fascist society in which the story takes place, which is that might makes right. Carl is a bully (even if he’s Johnny’s friend), and in the culture of Starship Troopers, if he wants to increase Johnny’s public humiliation, why not? Johnny needs to study harder, take it on the chin, or make Carl stop. In this regard, the interface satisfies both the students’ task and the culture’s…um…values.
I originally wanted to counter that with a strong statement that, “But that’s not us.” After all, modern federal privacy laws in the United States forbid this public display as a violation of students’ privacy. (See FERPA laws.) But apparently not everyone believes this. A look on debate.org (at the time of writing) shows that opinion is perfectly split on the topic. I could lay out my thoughts on which side is better for learning, but it’s really beyond the scope of this blog to build a case for either side of Lakoff’s Moral Politics.
You’re Doing More Than You Think You’re Doing
But it’s worth noting the scope of the issues at hand. This seems at first to be an interface just for checking grades, but when you look at the ecosystem in which it operates, it actually illustrates and reinforces a culture’s core virtues. The interface is sometimes not just the interface. Its designers are more than flowchart monkeys.
When the artificial intelligence autopilot, Otto, refused to give up authority, the Captain wrested control of the Axiom from it. Otto’s body is the helm wheel of the ship, and it fights back against the Captain. Otto wants to fulfill BNL’s orders to keep the ship in space. As they fight, the Captain dislodges a cover panel over Otto’s off switch. When the Captain sees the switch, he immediately realizes that he can regain control of the ship by deactivating Otto. After the Captain fights his way to the switch and flips it, Otto deactivates and the ship reverts to a manual control interface.
The panel of buttons showing Otto’s current status, next to the on/off switch, deactivates half its lights when the Captain switches over to manual. The dimmed icons indicate which systems are now offline. The Captain then effortlessly returns the ship to its proper flight path with a quick turn of the controls.
One interesting note is the similarity between Otto’s stalk control keypad and the keypad on the Eve Pod. Both have a circular button in the middle, with blue buttons in a semi-radial pattern around it. Given the Eve Pod’s interface, this should also be a series of start-up buttons or option commands. The main difference here is that they are all lit, whereas the Eve Pod’s buttons were dim until pressed. Since every other interface on the Axiom glows when in use, it looks like all of Otto’s commands and autopilot options are active when the Captain deactivates him.
A hint of practicality…
The panel is in a place that is accessible and would be easily located by service crew or trained operators. Given that the Axiom is a spaceship, the systems on board are probably heavily regulated and redundant. However, the panel isn’t obvious to untrained eyes, thanks to deliberate decisions by BNL. This makes sense for a company that doesn’t think people need or want to deal with this kind of thing on their own.
Once the panel is open, the operator has a clear view of which systems are on, and which are off. The major downside to this keypad (like the Eve Pod) is that the coding of the information is obscure. These cryptic buttons would only be understandable for a highly trained operator/programmer/setup technician for the system. Given the current state of the Axiom, unless the crew were to check the autopilot manual, it is likely that no one on board the ship knows what those buttons mean anymore.
Thankfully, the most important button is in clear English. We know English is important to BNL because it is the language of the ship and the language seen being taught to the new children on board. Anyone who had an issue with the autopilot system and could locate the button would know which button press would turn Otto off (as we then see the Captain immediately do).
Considering that Buy-N-Large’s mission is to create robots to fill humans’ every need, saving them from every tedious or unenjoyable job (garbage collecting, long-distance transportation, complex integrated systems, sports), it was both interesting and reassuring to see that there are manual over-rides on their mission-critical equipment.
…But hidden
The opposite situation could get a little tricky, though. If the ship were in manual mode, with the door closed and no qualified or trained personnel on the bridge, it would be incredibly difficult for anyone to figure out how to physically switch the ship back to autopilot. A hidden emergency control is useless in an emergency.
Hopefully, considering the heavy use of voice recognition on the ship, there is a way for the ship to recognize an emergency situation and quickly take control. We know this is possible because we see the ship completely take over and run through a Code Green procedure to analyze whether Eve had actually returned a plant from Earth. In that instance, the ship only required a short, confused grunt from the Captain to initiate a very complex procedure.
Security isn’t an issue here because we already know that the Axiom screens visitors to the bridge (the Gatekeeper). By tracking who is entering the bridge using the Axiom’s current systems, the ship would know who is and isn’t allowed to activate certain commands. The Gatekeeper would either already have this information coded in, or be able to activate it when he allowed people into the bridge.
For very critical emergencies, a system that could recognize a spoken ‘off’ command from senior staff or trained technicians on the Axiom would be ideal.
Anti-interaction as Standard Operating Procedure
The hidden door and the obscure, hard-wired off button continue the mission of Buy-N-Large: to encourage citizens to give up control for comfort, and to make it difficult to undo that decision. Seeing as the citizens are more than happy to give up that control at first, it looks like a profitable assumption for Buy-N-Large, at least in the short term. In the long term, we can take comfort that the human spirit—aided by an adorable little robot—will prevail.
So for BNL’s goals, this interface is fairly well designed. But for the real world, you would want some sort of graceful degradation that would enable qualified people to easily take control in an emergency. Even the most highly trained technicians appreciate clearly labeled controls and overrides so that they can deal directly with the problem at hand rather than fighting with the interface.
After the security ‘bot brings Eve across the ship (with Wall-e in tow), he arrives at the gatekeeper to the bridge. The Gatekeeper has the job of entering information about ‘bots, or activating and deactivating systems (labeled with “1”s and “0”s), into a pedestal keyboard with two small manipulator arms. It’s mounted on a large, suspended shaft, and once it sees the security ‘bot and confirms its clearance, it lets the ‘bot and the pallet through by clicking another, specific button on the keyboard.
The Gatekeeper is large—larger than most of the other robots we see on the Axiom. Its casing is a white shell around its inner hardware. This casing looks like it’s meant to protect or shield the internal components from light impacts or basic problems like dust. From the looks of the inner housing, the Gatekeeper should be able to move its ‘head’ up and down to point its eye in different directions, but while Wall-e and the security ‘bot are in the room, we only ever see it rotating around its suspension pole and using the glowing pinpoint in its red eye to track the objects it’s paying attention to.
When it lets the sled through, it sees Wall-e on the back of the sled, who waves to the Gatekeeper. In response, the Gatekeeper waves back with its jointed manipulator arm. After waving, the Gatekeeper looks at its arm. It looks surprised at the arm movement, as if it hadn’t considered the ability to use those actuators before. There is a pause that gives the distinct impression that the Gatekeeper is thinking hard about this new ability, then we see it waving the arm a couple more times to itself to confirm its new abilities.
The Gatekeeper seems to exist solely to enter information into that pedestal. From what we can see, it doesn’t move and likely (considering the rest of the ship) has been there since the Axiom’s construction. We don’t see any other actions from the pedestal keys, but considering that one of them opens a door temporarily, it’s possible that the other buttons have some other, more permanent functions like deactivating the door security completely, or allowing a non-authorized ‘bot (or even a human) into the space.
An unutilized sentience
The robot is a sentient being with a tedious, repetitive job, who doesn’t even know it can wave its arm until Wall-e introduces it to the concept. This fits with the other technology on board the Axiom, where intelligence lacks any correlation to the robot’s function. Thankfully, the robot doesn’t realize the lack of a larger world until that moment.
So what’s the pedestal for?
It still leaves open the question of what the pedestal controls actually do. If they’re all connected to security doors throughout the ship, then the Gatekeeper would have to be tied into the ship’s systems somehow to see who was entering or leaving each secure area.
The pedestal itself acts as a two-stage authentication system. The Gatekeeper has a powerful sentience, and must decide if the people or robots in front of it are allowed to enter the room or rooms it guards. Then, after that decision, it must make a physical action to unlock the door to enter the secure area. This implies a high level of security, which feels appropriate given that the elevator accesses the bridge of the Axiom.
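That two-stage flow—an authorization decision followed by a separate, deliberate unlock action—can be sketched minimally. The clearance list and names here are invented for illustration; the film never shows the underlying logic:

```python
# Hypothetical sketch of the Gatekeeper's two-stage authentication:
# stage 1 is the identity/clearance decision, stage 2 is the physical
# key press. Neither stage alone opens the door.

AUTHORIZED = {"security_bot", "captain"}   # assumed clearance list

def gatekeeper(visitor: str, press_unlock: bool) -> bool:
    cleared = visitor in AUTHORIZED        # stage 1: sentient decision
    return cleared and press_unlock        # stage 2: deliberate action

assert gatekeeper("security_bot", True) is True
assert gatekeeper("security_bot", False) is False  # decision alone isn't enough
assert gatekeeper("wall-e", True) is False         # key press alone isn't enough
```

Requiring a distinct physical action after the clearance decision is a guard against both impersonation and an accidental (or absent-minded) unlock.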
Since we’ve seen the robots have different vision modes, and improvements based on their function, it’s likely that the Gatekeeper can see more into the pedestal interface than the audience can, possibly including which doors each key links to. If not, then as a computer it would have perfect recall on what each button was for. This does not afford a human presence stepping in to take control in case the Gatekeeper has issues (like the robots seen soon after this in the ‘medbay’). But, considering Buy-N-Large’s desire to leave humans out of the loop at each possible point, this seems like a reasonable design direction for the company to take if they wanted to continue that trend.
It’s possible that the pedestal was intended for a human security guard that was replaced after the first generation of spacefarers retired. Another possibility is that Buy-N-Large wanted an obvious sign of security to comfort passengers.
What’s missing?
We learn after this scene that the security ‘bot is Otto’s ‘muscle’ and affords some protection. Given that the security ‘bot and others might be needed at unpredictable times, it feels like there should be a way to gain access to the bridge in an emergency: an integrated biometric scanner on the door that could be manually activated (eye scanner, palm scanner, RFID tags, etc.), or even a physical key device given only to someone like the Captain or trusted security officers. Though that assumes there is more than one entrance to the bridge.
This is a great showcase system for tours and commercials of an all-access luxury hotel and lifeboat. It looks impressive, and the Gatekeeper would be an effective way to make sure only people who are really supposed to get into the bridge are allowed past the barriers. But, Buy-N-Large seems to have gone too far in their quest for intelligent robots and has created something that could be easily replaced by a simpler, hard-wired security system.
While preparing for his night cycle, Wall-E stands at the rear drop door of his transport/home, cleaning out his collection cooler. In the middle of this ritual, an alert sounds from his external speakers. Concerned, Wall-E looks up to see a dust storm approaching, then hurries to finish cleaning his cooler and seal the door of the transport.
A Well Practiced Design
The Dust Storm Alert appears to override Wall-E’s main window into the world: his eyes. This is done to warn him of a very serious event that could damage him or permanently shut him down. What is interesting is that he doesn’t appear to register a visual response first. Instead, we hear the audio alert first, and only then does Wall-E’s eye-view show the visual alert.
Given the order of the two parts of the alert, Wall-E’s designers considered the audible part the most important piece of information. It comes first, is omnidirectional and loud enough for everyone to hear, and is followed by more explicit visual information.
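The two-phase ordering described above can be sketched in a few lines. Everything here is hypothetical, including the function name and the range/bearing parameters, which the film never specifies; the point is simply that the broadcast audio phase precedes the detailed, unit-specific visual phase.

```python
def dust_storm_alert(range_km: float, bearing_deg: float) -> list[str]:
    """Hypothetical sketch of the alert's two-phase ordering."""
    events = []
    # Phase 1: audio first -- no detail, just urgency, audible to every
    # actor (robot or human) in the area.
    events.append("AUDIO: alarm tone broadcast")
    # Phase 2: visual second -- explicit data overlaid on the unit's
    # own eye-view, for the unit that must act on it.
    events.append(f"VISUAL: storm at {range_km} km, bearing {bearing_deg} deg")
    return events
```

Ordering it this way means bystanders without a display still get the urgent part of the message, while the detail is reserved for the channel only Wall-E can see.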
Equal Opportunity Alerts
By having the audible alert first, all Wall-E units, other robots, and people in the area would be alerted of a major event. Then, the Wall-E units would be given the additional information like range and direction that they need to act. Either because of training or pre-programmed instructions, Wall-E’s vision does not actually tell him what the alert is for, or what action he should take to be safe. This could also be similar to tornado sirens, where each individual is expected to know where they are and what the safest nearby location is.
For humans working alongside Wall-E units, each person should have their own heads-up display, likely similar to a Google Glass device. When a Wall-E unit gets a dust storm alert, the human could then receive a sympathetic alert and guidance to the nearest safe area. Combined with regular training and storm drills, people in the wastelands of Earth would know exactly what to do.
Why Not Network It?
Whether by luck or proper programming, the alert is triggered with just enough time for Wall-E to get back to his shelter before the worst of the storm hits. Given that the alert didn’t trigger until Wall-E could see the dust cloud for himself, this feels like very short notice. Too short. A good improvement to the system would be a connection to a weather satellite in orbit, or to a weather broadcast in the city. This would let him take shelter well before any of the storm hits, protecting both him and his solar collectors.
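The difference the satellite link would make comes down to warning margin: how much slack remains between the alert and the storm’s arrival, after subtracting the time needed to seal the transport. The numbers below are invented for illustration; the film gives no figures.

```python
SHELTER_TIME_MIN = 10.0  # hypothetical: minutes Wall-E needs to seal up

def warning_margin(storm_eta_min: float,
                   shelter_time_min: float = SHELTER_TIME_MIN) -> float:
    """Minutes of slack between the alert firing and the storm arriving,
    after the time needed to reach shelter is spent."""
    return storm_eta_min - shelter_time_min

# Line-of-sight detection (the film's behavior): the storm is already close,
# so the margin is razor thin.
visual = warning_margin(storm_eta_min=12.0)

# Satellite forecast (the proposed fix): hours of notice instead of minutes.
satellite = warning_margin(storm_eta_min=180.0)
```

With the hypothetical figures above, line-of-sight detection leaves only a couple of minutes of slack, while a forecast-driven alert leaves nearly three hours.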
Other than this, the alert system is effective. It warns Wall-E of the approaching storm in time to act, and it also warns everyone in the local vicinity of the same issue. While the alert doesn’t inform everyone of what is happening, at least one actor (Wall-E) knows what it means and knows how to react. As with any storm warning system, having a connection that can provide forecasts of potentially dangerous weather would be a huge plus.