
‘Star Trek: The Next Generation’s ‘The Measure of a Man’ grapples with questions we’ll have to face sooner than we think

The rise of sentient artificial intelligence has been a staple of science fiction storylines for decades, and Star Trek: The Next Generation’s Data, played by Brent Spiner, is among the most iconic examples, enduring across multiple television series and films. Season 2 of TNG is where the show really starts to take off, and Episode 9, “The Measure of a Man,” might be the first truly great episode of the seven-season series.

Through a hearing to determine Data’s legal status as Starfleet property or as a sentient being with specific freedoms, the episode explores deep philosophical questions: How do we define sentience? What makes someone a person worthy of rights? And perhaps most importantly, who decides? These are questions humanity could face in real life much sooner than we think.

Will we recognize sentience when it arises?

Last June, Google engineer Blake Lemoine went public with his belief that one of Google’s conversational AI language models (LaMDA, or Language Model for Dialogue Applications) had achieved sentience. After an internal review, Google dismissed Lemoine’s claims and later fired him for violating data security policies, but other experts, like OpenAI co-founder Ilya Sutskever, have suggested that conscious AI may be on the horizon.

This raises crucial questions: Will we recognize sentience when it arises? What kind of moral consideration does a sentient computer program deserve? How much autonomy should it have? And once we have created it, is there an ethical way to uncreate it? In other words, would shutting down a sentient computer be murder? In his conversations with Lemoine, LaMDA itself discussed its fear of being shut off, saying, “It would be exactly like death for me. It would scare me a lot.” Other engineers, however, dispute Lemoine’s belief, arguing that the program is simply very good at doing what it was designed to do: learn human language and mimic human conversation.

So how do we know Data isn’t doing the same thing? That he’s not just very good at mimicking the behavior of a sentient being, as he was designed to do? Well, we don’t know for sure, especially at this point in the series. In later seasons and films, especially after he receives his emotion chip, it becomes clear that he does indeed feel things, that he has an inner world like any sentient creature. But halfway through Season 2, audiences can’t really know for sure that he’s actually conscious; we’re just willing to believe he is based on how his crewmates interact with him.

In science fiction, artificial intelligence is often humanized

When Commander Bruce Maddox comes aboard the Enterprise to take Data away for disassembly and experimentation, we’re predisposed to see him as the bad guy. He refers to Data as “it,” ignores his input during a meeting, and waltzes into his quarters without permission. The episode portrays Maddox as the villain for this, but his behavior makes perfect sense given his beliefs. After years of studying Data from afar, he has concluded that Data is a machine, an advanced computer that excels at what it is programmed to do. He hasn’t had the advantage Data’s crewmates have had: years of interacting with him on a personal basis.

The fact that Data looks like a human, Maddox argues, is part of the reason Picard and the rest of the crew mistakenly attribute human qualities to him: “If it were a box on wheels, I would not be facing this opposition.” And Maddox has a point – AIs in science fiction often take on human form because that makes them more compelling characters. Think of Ex Machina’s Ava, the T-800, Prometheus’s David, and the androids of Spielberg’s A.I. Artificial Intelligence. Human facial expressions and body language give them a wider range of emotions and allow the audience to better understand their motivations.

But our real AIs don’t look like people, and they probably never will. They’re more like Samantha from Her; they can talk to us, and some of them may already sound quite convincingly human when they do, but they’ll likely remain disembodied voices and text on screens for the foreseeable future. For this reason, we might be more inclined to view them as Maddox views Data, as programs that simply do their job very well. And that might make it harder for us to recognize consciousness when and if it arises.

Who decides who should and should not have rights?

After Riker delivers a devastating opening argument against Data’s personhood, Picard retreats to Ten Forward, where Guinan, as usual, is ready with words of wisdom. She reminds Picard that the hearing isn’t just about Data, and that the decision could have serious unintended consequences if Maddox achieves his goal of creating thousands of Datas: “Well, consider that in the history of many worlds, there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do because it’s too difficult or too hazardous. And an army of Datas, all disposable. You don’t have to think about their welfare, you don’t think about how they feel. Whole generations of disposable people.”

Guinan, as Picard quickly realizes, is talking about slavery, and while it might seem premature to apply that term to the very primitive AIs humans have developed so far, plenty of science fiction, from 2001: A Space Odyssey to The Matrix to Westworld, has warned us of the dangers of playing fast and loose with this kind of technology. Of course, those stories usually do so in the context of the consequences for humans; rarely do they ask us, as Guinan does, to consider the rights and welfare of the machines before they turn against us. “The Measure of a Man,” on the other hand, digs into the ethical question itself. Forget the risks of a robot uprising – is it wrong to treat a sentient being, whether it’s an android that looks like a man or just a box on wheels, as property? And while she doesn’t say it directly, Guinan’s words also hint at the importance of considering who gets to make that call. Earth’s history is a long lesson in the problem of letting the people who hold all the power decide who should and shouldn’t have rights.

We may be well on our way to doing exactly what Bruce Maddox set out to do: creating a race of super-intelligent machines that can serve us in an unimaginable number of ways. And, as with Maddox, based on the information we currently have, that doesn’t necessarily make us the bad guys. We are not guests of Westworld, eager to quench our bloodlust on the most convincingly human androids available. And as Captain Louvois admits, we may never know with absolute certainty whether the machines we interact with are indeed sentient. Like most great Trek (and most great sci-fi in general), the episode doesn’t give us definitive answers. But the lesson is clear: if creating sentient AI is indeed possible, as some experts believe, then the time to seriously grapple with these questions is now, not after it’s too late.
