Unknown: Killer Robots (2023)

Unknown: Killer Robots (2023)

Post by bunniefuu »

[cryptic music plays]

[music intensifies]

[woman 1] I find AI to be awe-inspiring.

All right, circling up. Good formation.

[woman 1] AI has the potential

to eliminate poverty,

give us new medicines,

and make our world even more peaceful.

[man 1] Nice work.

[woman 1] But there are so many risks

along the way.

[music crescendos]

With AI, we are essentially creating

a non-human intelligence

that is very unpredictable.

As it gets more powerful,

where are the red lines

we're going to draw with AI,

in terms of how we want to use it,

or not use it?

There is no place that is ground zero

for this conversation

more than military applications.

The battlefield has now become

the province of software and hardware.

[man 1] Target has been acquired...

[woman 1] And militaries are racing

to develop AI faster

than their adversaries.

[beeping]

[man 1] I'm dead.

[tense music plays]

[radio chatter]

[woman 1] We're moving towards a world

where not only major militaries,

but non-state actors, private industries,

or even our local police department

down the street

could be able to use these weapons

that can autonomously kill.

Will we cede the decision to take a life

to algorithms, to computer software?

It's one of the most pressing issues

of our time.

And, if not used wisely,

poses a grave risk

to every single person on the planet.

[buzzing]

[music crescendos]

[ominous music plays]

[music crescendos]

[flies buzzing]

[electronic buzzing]

[tense music plays]

[indistinct radio chatter]

[male voice] Good work out there, guys.

[muffled radio chatter]

[music intensifies]

[indistinct radio chatter]

[gunshots]

[gunshot]

[radio chatter]

[laughter]

[tech 1] That's a wider lens

than we had before.

[man 2] Really?

[tech 1] You can see a lot more data.

Very cool.

At Shield AI, we are building an AI pilot

that's taking self-driving,

artificial intelligence technology

and putting it on aircraft.

[cryptic music plays]

When we talk about an AI pilot,

we think about giving an aircraft

a higher level of autonomy.

They will be solving problems

on their own.

Nova is an autonomous quadcopter

that explores buildings

and subterranean structures

ahead of clearance forces

to provide eyes and ears in those spaces.

You can definitely tell

a ton of improvements

since we saw it last.

[tech 2] We're working

on some exploration changes today.

We're working a little

floor-by-floor stuff.

It'll finish one floor, all the rooms,

before going to the second.

- That's awesome.

- We put in some changes recently...

A lot of people often ask me

why artificial intelligence

is an important capability.

And I just think back to the missions

that I was executing.

[dramatic music plays]

Spent seven years in the Navy.

I'm a former Navy SEAL,

deployed twice to Afghanistan,

once to the Pacific Theater.

In a given day, we might have to clear

150 different compounds or buildings.

[tense music plays]

One of the core capabilities

is close-quarters combat.

Gunfighting at extremely close ranges

inside buildings.

[gunshot]

You are getting shot at.

There are IEDs

potentially inside the building.

[yelling]

It's the most dangerous thing

that any special operations forces member,

any infantry member,

can do in a combat zone.

Bar none.

[somber music plays]

For the rest of my life

I'll be thankful for my time in the Navy.

There are a collection of moments

and memories that, when I think about,

I certainly get emotional.

It is cliché that freedom isn't free,

but I 100% believe it.

[quavers] Um, I've experienced it,

and it takes a lot of sacrifice.

Sorry.

When something bad happens

to one of your teammates,

whether they're hurt or they're killed,

um, it's just a...

It's a really tragic thing.

You know, for me now

in the work that we do,

it's motivating to um... be able to

you know,

reduce the number of times

that ever happens again.

[ominous music plays]

[man 3] In the late 2000s,

there was this awakening

inside the Defense Department

to what you might call

the accidental robotics revolution.

We deployed thousands of air

and ground robots to Iraq and Afghanistan.

[man 4] When I was asked

by the Obama administration

to become the Deputy Secretary of Defense,

the way war was fought...

uh, was definitely changing.

Robots were used

where people would have been used.

[Paul] Early robotics systems

were remote-controlled.

There's a human driving it, steering it,

like you might a remote-controlled car.

[Bob] They first started generally

going after improvised explosive devices,

and if the bomb blew up,

the robot would blow up.

Then you'd say, "That's a bummer.

Okay, get out the other robot."

[tense music plays]

[woman 2] In Afghanistan,

you had the Predator drone,

and it became a very, very useful tool

to conduct airstrikes.

[Paul] Over time, m*llitary planners

started to begin to wonder,

"What else could robots be used for?"

And where was this going?

And one of the common themes

was this trend towards greater autonomy.

[woman 2] An autonomous weapon

is one that makes decisions on its own,

with little to no human intervention.

So it has an independent capacity,

and it's self-directed.

And whether it can kill

depends on whether it's armed or not.

[beeping]

[Bob] When you have more and more autonomy

in your entire system,

everything starts to move

at a higher clock speed.

And when you operate

at a faster pace than your adversaries,

that is an extraordinarily

big advantage in battle.

[man 5] What we focus on as it relates

to autonomy

is highly resilient intelligence systems.

[buzzing]

Systems that can read and react

based on their environment,

and make decisions

about how to maneuver in that world.

The facility that we're at today

was originally built as a movie studio

that was converted over

to enable these realistic

military training environments.

[ominous music plays]

We are here to evaluate our AI pilot.

The mission is looking for threats.

It's about clearance forces.

It can make a decision

about how to attack that problem.

[man 6] We call this "the fatal funnel."

You have to come through a doorway.

It's where we're most vulnerable.

This one looks better.

[man 6] The Nova lets us know,

is there a shooter behind that door,

is there a family behind that door?

It'll allow us to make better decisions

and keep people out of harm's way.

[tense music plays]

[music intensifies]

[man 7] We use the vision sensors

to be able to get an understanding

of what the environment looks like.

It's a multistory building.

Here's the map.

While I was exploring,

here's what I saw and where I saw them.

[music crescendos]

[Brandon] Person detector. That's sweet.

[tense music continues]

[man 7] One of the other sensors

onboard Nova is a thermal scanner.

If that's 98.6 degrees,

it probably is a human.

People are considered threats

until deemed otherwise.

It is about eliminating the fog of war

to make better decisions.
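
A minimal sketch of the temperature heuristic described here, in Python. The tolerance band, names, and threat-until-cleared labeling are illustrative assumptions, not Shield AI's actual software:

```python
# Illustrative only: flag thermal hotspots that are plausibly human.
# The threshold and tolerance are assumptions for this sketch.

HUMAN_TEMP_F = 98.6   # the reference point quoted in the film
TOLERANCE_F = 6.0     # skin reads cooler than core temperature

def classify_hotspot(temp_f: float) -> str:
    """Label one thermal reading; people are treated as threats
    until deemed otherwise, per the doctrine described above."""
    if abs(temp_f - HUMAN_TEMP_F) <= TOLERANCE_F:
        return "possible human -- treat as threat until cleared"
    return "non-human heat source"

for reading in (70.2, 96.1, 101.3, 140.0):
    print(f"{reading:6.1f} F -> {classify_hotspot(reading)}")
```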

[music builds]

And when we look to the future,

we're scaling out to build teams

of autonomous aircraft.

[music crescendos]

[low buzzing]

[man 7] With self-driving vehicles,

ultimately the person has said to it,

"I'd like you to go

from point A to point B."

[low buzzing]

Our systems are being asked

not to go from point A to point B,

but to achieve an objective.

[cryptic music plays]

It's more akin to, "I need milk."

And then the robot would have to

figure out what grocery store to go to,

be able to retrieve that milk,

and then bring it back.

And even more so,

it may be more appropriately stated as,

"Keep the refrigerator stocked."

And so, this is a level of intelligence

in terms of figuring out what we need

and how we do it.

And if there is a challenge, or a problem,

or an issue arises,

figure out how to mitigate that.
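
The gap between waypoint autonomy ("go from A to B") and objective-level autonomy ("keep the refrigerator stocked") can be sketched in a few lines. Everything here, the inventory, the planner, the step names, is a hypothetical toy, not Shield AI's software:

```python
# A route is fixed; a standing objective must be monitored, planned
# for, and re-planned around when something goes wrong.

def objective_met(inventory: dict) -> bool:
    return inventory.get("milk", 0) >= 1

def make_plan() -> list[str]:
    # The agent decomposes the objective into concrete steps on its own.
    return ["pick a grocery store", "travel there", "get milk", "return home"]

def run_agent(inventory: dict) -> None:
    while not objective_met(inventory):      # monitor the standing objective
        for step in make_plan():             # plan, and re-plan on failure
            print("executing:", step)
        inventory["milk"] = inventory.get("milk", 0) + 1   # plan succeeded

run_agent({"milk": 0})
```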

[cryptic music continues]

[Brandon] When I had made the decision

to leave the Navy, I started thinking,

"Okay. Well, what's next?"

[bell ringing]

I grew up with the Internet.

Saw what it became.

And part of the conclusion

that I had reached was...

AI in 2015

was really where the Internet was in 1991.

And AI was poised to take off

and be one of the most

powerful technologies in the world.

Working with it every single day,

I can see the progress that is being made.

But for a lot of people,

when they think "AI,"

their minds immediately go to Hollywood.

[beeping]

[computer voice] Shall we play a game?

How about Global Thermonuclear War?

Fine.

[dramatic music plays]

[man 8] When people think

of artificial intelligence generally,

they might think of The Terminator.

Or I, Robot.

Deactivate.

What am I?

[man 8] Or The Matrix.

Based on what you see

in the sci-fi movies,

how do you know I'm a human?

I could just be computer generated AI.

Replicants are like any other machine.

They're either a benefit or a hazard.

[Andrew] But there's all sorts

of more primitive AIs,

that are still going to change our lives

well before we reach

the thinking, talking robot stage.

[woman 3] The robots are here.

The robots are making decisions.

The robot revolution has arrived,

it's just that it doesn't look like

what anybody imagined.

[film character] Terminator's

an infiltration unit.

Part man, part machine.

[man 9] We're not talking about

a Terminator-style killer robot.

We're talking about AI

that can do some tasks that humans can do.

But the concern is

whether these systems are reliable.

[reporter 1] New details in last night's

crash involving a self-driving Uber SUV.

The company created

an artificial intelligence chatbot.

She took on a rather r*cist tone...

[reporter 2] Twenty-six state legislators

falsely identified as criminals.

The question is whether they can handle

the complexities of the real world.

[birds chirping]

[somber music plays]

[man 10] The physical world

is really messy.

There are many things that we don't know,

making it much harder to train AI systems.

[upbeat music plays]

That is where machine learning systems

have started to come in.

[man 11] Machine learning

has been a huge advancement

because it means that we don't have

to teach computers everything.

[music intensifies]

You actually give a computer

millions of pieces of information,

and the machine begins to learn.

And that could be applied to anything.
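
That idea, learning a rule from examples instead of being told the rule, fits in a few lines. A toy sketch in pure Python with one weight and made-up data, not any particular lab's code:

```python
# Fit y = w * x from examples by gradient descent: the program is never
# told the rule (y = 2x); it recovers it from the data.

data = [(x, 2.0 * x) for x in range(1, 101)]   # the "pieces of information"

w, lr = 0.0, 1e-5
for _ in range(200):                  # repeated passes over the examples
    for x, y in data:
        error = w * x - y             # how wrong the current guess is
        w -= lr * error * x           # nudge the weight to shrink the error

print(f"learned w = {w:.3f}  (the hidden rule was y = 2x)")
```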

[Pulkit] Our Robot Dog project,

we are trying to show

that our dog can walk across

many, many diverse terrains.

Humans have evolved

over billions of years to walk,

but there's a lot of intelligence

in adapting to these different terrains.

The question that remains

for robotic systems is,

could they also adapt

like animals and humans?

[music crescendos]

[cryptic music plays]

With machine learning,

we collect lots and lots

of data in simulation.

A simulation is a digital twin of reality.

We can have many instances of that reality

running on different computers.

It samples thousands of actions

in simulation.

The ground that they're encountering

has different slipperiness.

It has different softness.

We take all the experience

of these thousands of robots

from simulation and download this

into a real robotic system.

The test we're going to do today

is to see if it can adapt to new terrains.
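
The training recipe described here, randomize the simulated terrain, score each candidate behavior across many randomized worlds, then transfer the winner to the real robot, can be sketched as follows. The terrain model and the single "gain" parameter are toy assumptions, not the lab's actual pipeline:

```python
import random

def episode_score(gain: float) -> float:
    """One simulated walk over a terrain with randomized properties."""
    friction = random.uniform(0.2, 1.0)   # slipperiness varies per episode
    softness = random.uniform(0.0, 1.0)   # foam vs. hard poly surface
    # Toy dynamics: the best gain tracks the terrain's combined difficulty.
    return -(gain - (friction + softness)) ** 2

def robustness(gain: float, n_terrains: int = 200) -> float:
    # Domain randomization: average performance over many random terrains.
    return sum(episode_score(gain) for _ in range(n_terrains)) / n_terrains

candidates = [random.uniform(0.0, 2.0) for _ in range(500)]
best_gain = max(candidates, key=robustness)
print(f"policy parameter to download to the real robot: {best_gain:.2f}")
```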

[tense music plays]

[tense music continues]

When the robot was going over foam,

the feet movements

were stomping on the ground.

Versus when it came on this poly surface,

it was trying to adjust the motion,

so it doesn't slip.

Then that is when it strikes you,

"This is what machine learning

is bringing to the table."

[cryptic music plays]

[whimpering]

We think the Robot Dog

could be really helpful

in disaster response scenarios

where you need to navigate

many different kinds of terrain.

Or putting these dogs to do surveillance

in harsh environments.

[cryptic music continues]

[music crescendos]

But most technology

runs into the challenge

that there is some good they can do,

and there's some bad.

[grim music plays]

For example,

we can use nuclear technology for energy...

but we also could develop atom bombs

which are really bad.

This is what is known

as the dual-use problem.

Fire is dual-use.

Human intelligence is dual-use.

So, needless to say,

artificial intelligence is also dual-use.

It's really important

to think about AI used in context

because, yes, it's terrific

to have a search-and-rescue robot

that can help locate somebody

after an avalanche,

but that same robot can be weaponized.

[music builds]

[gunshots]

[Pulkit] When you see companies

using robotics

for putting armed weapons on them,

a part of you becomes mad.

And a part of it is the realization

that when we put our technology,

this is what's going to happen.

[woman 4] This is a real

transformative technology.

[grim music continues]

These are weapon systems

that could actually change

our safety and security in a dramatic way.

[music crescendos]

As of now, we are not sure

that machines can actually make

the distinction

between civilians and combatants.

[somber music plays]

[indistinct voices]

[Paul] Early in the war in Afghanistan,

I was part of an Army Ranger sniper team

looking for enemy fighters

coming across the border.

And they sent a little girl

to scout out our position.

One thing that never came up

was the idea of shooting this girl.

[children squealing]

Under the laws of war,

that would have been legal.

[indistinct chatter]

They don't set an age

for enemy combatants.

If you built a robot

to comply perfectly with the law of war,

it would have shot this little girl.

How would a robot know the difference

between what's legal and what is right?

[indistinct chatter]

[beeping]

[man 12] When it comes to

autonomous drone warfare,

they wanna take away the harm

that it places on American soldiers

and the American psyche,

uh, but the increase on civilian harm

ends up with Afghans,

and Iraqis, and Somalis.

[somber music plays]

[indistinct chatter]

I would really ask those who support

trusting AI to be used in drones,

"What if your village was

on the receiving end of that?"

[beeping]

[man 13] AI is a dual-edged sword.

It can be used for good,

which is what we'd use it for ordinarily,

and at the flip of a switch,

the technology becomes potentially

something that could be lethal.

[thunder rumbles]

[cryptic music plays]

I'm a clinical pharmacologist.

I have a team of people

that are using artificial intelligence

to figure out drugs

that will cure diseases

that are not getting any attention.

It used to be with drug discoveries,

you would take a molecule that existed,

and do a tweak to that

to get to a new drug.

And now we've developed AI

that can feed us with millions of ideas,

millions of molecules,

and that opens up so many possibilities

for treating diseases

we've never been able to treat previously.

But there's definitely a dark side

that I never thought

that I would go to.

[tense music plays]

This whole thing started

when I was invited

by an organization out of Switzerland

called the Spiez Laboratory

to give a presentation

on the potential misuse of AI.

[music intensifies]

[man 14] Sean just sent me an email

with a few ideas

of some ways we could misuse

our own artificial intelligence.

And instead of asking our model

to create drug-like molecules,

that could be used to treat diseases,

let's see if we can generate

the most toxic molecules possible.

[grim music plays]

[Sean] I wanted to make the point,

could we use AI technology

to design molecules that were deadly?

[Fabio] And to be honest,

we thought it was going to fail

because all we really did

was flip this zero to a one.

And by inverting it,

instead of driving away from toxicity,

now we're driving towards toxicity.

And that's it.
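
In the abstract, the "flip this zero to a one" the researchers describe is a sign change on one term of a scoring function. A deliberately toy illustration with made-up numbers and no chemistry in it:

```python
# A generative pipeline ranks candidates by a score that penalizes
# predicted toxicity; negating that single weight makes the same
# pipeline seek toxicity instead. Field names are hypothetical.

def score(candidate: dict, toxicity_weight: float) -> float:
    return candidate["efficacy"] - toxicity_weight * candidate["toxicity"]

candidates = [
    {"name": "A", "efficacy": 0.9, "toxicity": 0.10},
    {"name": "B", "efficacy": 0.4, "toxicity": 0.95},
]

for w in (+1.0, -1.0):  # +1 drives away from toxicity; -1 drives toward it
    best = max(candidates, key=lambda c: score(c, w))
    print(f"toxicity_weight = {w:+.0f} -> pipeline selects {best['name']}")
```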

[music intensifies]

While I was home,

the computer was doing the work.

I mean, it was cranking through,

generating thousands of molecules,

and we didn't have to do anything

other than just push "go."

[music crescendos]

[birds chirping]

[Fabio] The next morning,

there was this file on my computer,

and within it

were roughly 40,000 molecules

that were potentially

some of the most toxic molecules

known to humankind.

[grim music plays]

[Sean] The hairs on the back of my neck

stood up on end.

I was blown away.

The computer made

tens of thousands of ideas

for new chemical weapons.

Obviously, we have molecules

that look like and are VX analogs and VX

in the data set.

VX is one of the most potent

chemical weapons in the world.

[reporter 3] New claims from police

that the women seen attacking Kim Jong-nam

in this airport assassination

were using a deadly nerve agent called VX.

[Sean] It can cause death

through asphyxiation.

This is a very potent molecule,

and most of these molecules were predicted

to be even more deadly than VX.

[Fabio] Many of them had never,

as far as we know, been seen before.

And so, when Sean and I realized this,

we're like,

"Oh, what have we done?" [chuckles]

[grim music continues]

[Sean] I quickly realized

that we had opened Pandora's box,

and I said, "Stop.

Don't do anything else. We're done."

"Just make me the slides

that I need for the presentation."

When we did this experiment,

I was thinking, "What's the worst thing

that could possibly happen?"

But now I'm like, "We were naive.

We were totally naive in doing it."

[music intensifies]

The thing that terrifies me the most

is that anyone could do what we did.

All it takes is the flip of a switch.

[somber music plays]

How do we control

this technology before it's used

potentially to do something

that's utterly destructive?

[woman 1] At the heart of the conversation

around artificial intelligence

and how do we choose to use it in society

is a race between the power,

with which we develop technologies,

and the wisdom that we have to govern it.

[somber music continues]

There are the obvious

moral and ethical implications

of the same thing

that powers our smartphones

being entrusted

with the moral decision to take a life.

I work with the Future Of Life Institute,

a community of scientist activists.

We're overall trying to show

that there is this other side

to speeding up and escalating automation.

[Emilia] But we're trying to make sure

that technologies we create

are used in a way

that is safe and ethical.

Let's have conversations

about rules of engagement,

and codes of conduct in using AI

throughout our weapons systems.

Because we are now seeing

"enter the b*ttlefield" technologies

that can be used to k*ll autonomously.

[beeping]

[rocket whistles]

[explosion rumbles]

In 2021, the UN released

a report on the potential use

of a lethal autonomous weapon

on the battlefield in Libya.

[reporter 4] A UN panel said that a drone

flying in the Libyan civil war last year

had been programmed

to attack targets autonomously.

[Emilia] If the UN reporting is accurate,

this would be

a watershed moment for humanity.

Because it marks a use case

where an AI made the decision

to take a life, and not a human being.

[dramatic music plays]

[beeping]

[Stacie] You're seeing advanced

autonomous weapons

beginning to be used

in different places around the globe.

There were reports out of Israel.

[dramatic music continues]

Azerbaijan used autonomous systems

to target Armenian air defenses.

[Sean] It can fly around

the battlefield for hours,

looking for things to hit on its own,

and then plow into them

without any kind of human intervention.

[Stacie] And we've seen recently

these different videos

that are posted in Ukraine.

[Paul] It's unclear what mode they might

have been in when they were operating.

Was a human in the loop,

choosing what targets to attack,

or was the machine doing that on its own?

But there will certainly

come a point in time,

whether it's already happened in Libya,

Ukraine or elsewhere,

where a machine makes its own decision

about whom to kill on the battlefield.

[music crescendos]

[Izumi] Machines exercising

lethal power against humans

without human intervention

is politically unacceptable,

morally repugnant.

Whether the international community

will be sufficient

to govern those challenges

is a big question mark.

[somber music plays]

[Emilia] If we look towards the future,

even just a few years from now,

what the landscape looks like

is very scary,

given that the amount

of capital and human resource

going into making AI more powerful

and using it

for all of these different applications,

is immense.

[tense music plays]

[indistinct chatter]

Oh my God, this guy.

[Brandon] He knows he can't win.

Oh... [muttering]

When I see AI win at different problems,

I find it inspirational.

Going for a little Hail Mary action.

And you can apply those same tactics,

techniques, procedures to real aircraft.

- [friend] Good game.

- [Brandon] All right, good game. [laughs]

[sighs]

It's surprising to me

that people continue to make statements

about what AI can't do. Right?

"Oh, it'll never be able

to beat a world champion in chess."

[tense classical music plays]

[reporter 5] An IBM computer

has made a comeback

in Game 2 of its match

with world chess champion, Garry Kasparov.

[commentator] Whoa! Kasparov has resigned!

[Kasparov] When I see something

that is well beyond my understanding,

I'm scared. And that was something

well beyond my understanding.

[Brandon] And then people would say,

"It'll never be able to b*at

a world champion in the game of Go."

[Go champion] I believe human intuition

is still too advanced for AI

to have caught up.

[tense music continues]

[man 15] Go is one of the most

complicated games anyone can learn

because the number of possible board positions,

when you do the math,

exceeds the number of atoms

in the entire universe.

There was a team at Google

called DeepMind,

and they created a program called AlphaGo

to be able to beat

the world's best players.

[officiator] Wow.

Congratulations to...

- AlphaGo.

- AlphaGo.

A computer program

has just beaten a 9 dan professional.

[Brandon] Then DeepMind chose StarCraft

as kind of their next AI challenge.

StarCraft is perhaps the most popular

real-time strategy game of all time.

AlphaStar became famous

when it started defeating world champions.

[host 1] AlphaStar

absolutely smashing Immortal Arc.

[host 2] Know what?

This is not gonna be a fight

that the pros can win.

It's kind of ridiculous.

[tense classical music continues]

[Brandon] Professional gamers say,

"I would never try that tactic."

"I would never try that strategy.

That's something that's not human."

And that was perhaps,

you know, the "a-ha" moment for me.

[music crescendos]

I came to realize the time is now.

There's an important technology

and an opportunity to make a difference.

[somber music plays]

I only knew the problems that I had faced

as a SEAL in close-quarters combat,

but one of my good friends,

who was an F-18 pilot, told me,

"We have the same problem

in the fighter jet community."

"They are jamming communications."

"There are proliferated

surface-to-air missile sites

that make it too dangerous to operate."

Imagine if we had a fighter jet

that was commanded by an AI.

[host 3] Welcome

to the AlphaDogfights.

We're a couple of minutes away

from this first semifinal.

[Brandon] DARPA, the Defense

Advanced Research Projects Agency,

had seen AlphaGo and AlphaStar,

and so this idea of the AlphaDogfight

competition came to life.

[host 4] It's what you wanna see

your fighter pilots do.

[host 3] This looks like

human dogfighting.

[tense music plays]

[Brandon] Dogfighting is

fighter-on-fighter aircraft going at it.

You can think about it

as a boxing match in the sky.

Maybe people have seen the movie Top Gun.

- [character] Can we outrun these guys?

- [Maverick] Not their missiles and guns.

[character] It's a dogfight.

[Brandon] Learning to master dogfighting

can take eight to ten years.

It's an extremely complex challenge

to build AI around.

[dramatic music plays]

[keyboard tapping]

The prior approaches to autonomy

and dogfighting tended to be brittle.

[man 16] We figured machine learning was

probably the way to solve this problem.

At first, the AI knew nothing

about the world in which it was dropped.

It didn't know it was flying

or what dogfighting was.

It didn't know what an F-16 is.

All it knew

was the available actions it could take,

and it would start

to randomly explore those actions.

[colleague] The blue plane's been training

for only a small amount of time.

You can see it wobbling back and forth,

uh, flying very erratically,

generally away from its adversary.

As the fight progresses,

we can see blue is starting

to establish here its game plan.

It's more in a position to shoot.

Once in a while,

the learning algorithm said,

"Here's a cookie.

Keep doing more of that."

We can take advantage of computer power

and train the agents many times

in parallel.

It's like a basketball team.

Instead of playing the same team

over and over again,

you're traveling the world

playing 512 different teams,

all at the same time.

You can get very good, very fast.

We were able to run that simulation 24/7

and get something like 30 years

of pilot training time in, in 10 months.
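
The loop described here, random actions at first, a reward "cookie" when something works, hundreds of copies training in parallel, is the core of reinforcement learning. A toy bandit-style sketch with made-up maneuvers and win rates, not the actual AlphaDogfight code:

```python
import random
from collections import defaultdict

ACTIONS = ["climb", "dive", "break-left", "break-right", "guns"]
TRUE_WIN_RATE = {"climb": 0.20, "dive": 0.25, "break-left": 0.30,
                 "break-right": 0.35, "guns": 0.60}  # hidden from the agent

def engagement(action: str) -> float:
    """One simulated fight; a reward of 1.0 is the 'cookie'."""
    return 1.0 if random.random() < TRUE_WIN_RATE[action] else 0.0

value = defaultdict(float)   # the agent's learned estimate per action
count = defaultdict(int)

PARALLEL, ROUNDS = 512, 200  # "playing 512 different teams at once"
for _ in range(ROUNDS):
    for _ in range(PARALLEL):
        if random.random() < 0.1:                    # keep exploring
            a = random.choice(ACTIONS)
        else:                                        # exploit what works
            a = max(ACTIONS, key=lambda x: value[x])
        r = engagement(a)
        count[a] += 1
        value[a] += (r - value[a]) / count[a]        # running average

print("preferred maneuver:", max(ACTIONS, key=lambda x: value[x]))
```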

[music builds]

We went from barely able

to control the aircraft

to being a stone-cold assassin.

Under training, we were competing only

against other artificial intelligence.

But competing against humans directly

was kind of the ultimate target.

My name is Mike Benitez,

I'm a Lieutenant Colonel

in the U.S. Air Force.

Been on active duty about 25 years.

I've got 250 combat missions

and I'm a weapons school graduate,

which is the Air Force's version of Top Gun.

I've never actually flown against AI.

So I'm pretty excited

to see how well I can do.

[commander] We got now a 6,000 feet

offensive set up nose-to-nose.

Fight's on.

[tense music plays]

[air rushing]

[machine gun fire]

He's gone now.

Yeah, that's actually really interesting.

[muttering indistinctly]

[machine gun fire]

Dead. Got him. Flawless victory.

[chuckling]

All right, round two.

[tense music continues]

[machine gun fire]

What the artificial intelligence is doing

is maneuvering with such precision,

uh, that I just can't keep up with it.

[air rushing]

[machine gun fire]

[air rushing]

Right into the merge.

Oh, now you're gone.

[machine gun fire]

Got him!

Still got me.

[laughing]

[Brett] AI is never scared.

There's a human emotional element

in the cockpit an AI won't have.

[music rises]

One of the more interesting strategies

our AI developed,

was what we call the face shot.

Usually a human wants to shoot from behind

because it's hard for them

to shake you loose.

They don't try face shots

because you're playing a game of chicken.

[beeping]

When we come head-on,

3,000 feet away to 500 feet away

can happen in a blink of an eye.

You run a high risk of colliding,

so humans don't try it.

[high-pitched tone]

The AI, unless it's told to fear death,

will not fear death.

[machine gun fire]

[Mike] All good. Feels like

I'm fighting against a human, uh,

a human that has a reckless abandonment

for safety. [chuckles]

He's not gonna survive this last one.

[air rushing]

[tense music continues]

[wind whistling]

He doesn't have enough time.

Ah!

[Brett] Good night.

[beeping]

[Mike] I'm dead.

It's humbling to know

that I might not even be

the best thing for this mission,

and that thing could be something

that replaces me one day.

[colleague] Same 6 CAV.

One thousand offset.

[Brandon] With this AI pilot

commanding fighter aircraft,

the winning is relentless, it's dominant.

It's not just winning by a wide margin.

It's, "Okay, how can we get that

onto our aircraft?" It's that powerful.

[melodic music plays]

[Nathan] It's realistic to expect

that AI will be piloting an F-16,

and it will not be that far out.

[Brandon] If you're going up against

an AI pilot that has a 99.99999% win rate,

you don't stand a chance.

[tense music plays]

When I think about

one AI pilot being unbeatable,

I think about what a team of 50, or 100,

or 1,000 AI pilots

can continue to, uh, achieve.

Swarming is a team

of highly intelligent aircraft

that work with each other.

They're sharing information about

what to do, how to solve a problem.

[beeping]

Swarming will be a game-changing

and transformational capability

to our military and our allies.

[crickets chirping]

[dramatic music plays]

[music intensifies]

[muffled radio music]

[tense music plays]

[wind whistling]

[buzzing]

[truck accelerating]

[music builds]

[controller] Target has been acquired,

and the drones are tracking him.

[music crescendos]

[controller] Here comes the land.

[man 17] Primary goal

of the swarming research we're working on

is to deploy a large number of drones

over an area that is hard to get to

or dangerous to get to.

[cryptic music plays]

[buzzing]

The Army Research Lab has been supporting

this particular research project.

If you want to know what's in a location,

but it's hard to get to that area,

or it's a very large area,

then deploying a swarm

is a very natural way

to extend the reach of individuals

and collect information

that is critical to the mission.

[music intensifies]

So, right now in our swarm deployment,

we essentially give a single command

to go track the target of interest.

Then the drones go

and do all of that on their own.

Artificial intelligence allows

the robots to move collectively as a swarm

in a decentralized manner.

[melodic music plays]

In the swarms in nature that we see,

there's no boss,

no main animal telling them what to do.

The behavior is emerging

out of each individual animal

following a few simple rules.

And out of that grows this emergent

collective behavior that you see.
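
Emergent flocking from simple per-agent rules is a classic result (Reynolds-style "boids"). A toy version, using flock averages for brevity where a truly decentralized swarm would use only nearby neighbors; this is not the Army Research Lab project's code:

```python
import random

N, STEPS = 20, 100
pos = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def step() -> None:
    cx = sum(p[0] for p in pos) / N      # flock center (cohesion target)
    cy = sum(p[1] for p in pos) / N
    avx = sum(v[0] for v in vel) / N     # average heading (alignment)
    avy = sum(v[1] for v in vel) / N
    for i in range(N):
        # Cohesion and alignment: steer toward the group, match headings.
        vel[i][0] += 0.01 * (cx - pos[i][0]) + 0.05 * (avx - vel[i][0])
        vel[i][1] += 0.01 * (cy - pos[i][1]) + 0.05 * (avy - vel[i][1])
        for j in range(N):               # separation: avoid crowding
            d = abs(pos[i][0] - pos[j][0]) + abs(pos[i][1] - pos[j][1])
            if i != j and d < 0.5:
                vel[i][0] += 0.05 * (pos[i][0] - pos[j][0])
                vel[i][1] += 0.05 * (pos[i][1] - pos[j][1])
        vel[i][0] *= 0.95                # damping keeps speeds bounded
        vel[i][1] *= 0.95
        pos[i][0] += vel[i][0]
        pos[i][1] += vel[i][1]

for _ in range(STEPS):
    step()
spread = max(p[0] for p in pos) - min(p[0] for p in pos)
print(f"no leader, yet the flock coheres: x-spread = {spread:.2f}")
```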

[buzzing]

What's awe-inspiring

about swarms in nature

is the graceful ability

in which they move.

It's as if they were built

to be a part of this group.

Ideally, what we'd love to see

with our drone swarm is,

much like in the swarm in nature,

decisions being made

by the group collectively.

[melodic music continues]

The other piece of inspiration for us

comes in the form

of reliability and resiliency.

That swarm will not go down

if one individual animal

doesn't do what it's supposed to do.

[buzzing]

Even if one of the agents falls out,

or fails,

or isn't able to complete the task,

the swarm will continue.

And ultimately,

that's what we'd like to have.

We have this need in combat scenarios

for identifying enemy aircraft,

and it used to be we required

one person controlling one robot.

As autonomy increases,

I hope we will get to see

a large number of robots

being controlled

by a very small number of people.

I see no reason why we couldn't achieve

a thousand eventually

because each agent

will be able to act of its own accord,

and the sky's the limit.

We can scale our learning...

[Justin B.] We've been working on swarming

in simulation for quite some time,

and it is time to bring

that to real-world aircraft.

We expect to be doing

three robots at once over the network,

and then starting

to add more and more capabilities.

We want to be able

to test that on smaller systems,

but take those same concepts

and apply them to larger systems,

like a fighter jet.

[Brandon] We talk a lot about,

how do you give a platoon

the combat power of a battalion?

[dramatic music plays]

Or a battalion

the combat power of a brigade?

You can do that with swarming.

And when you can unlock

that power of swarming,

you have just created

a new strategic deterrence

to military aggression.

[dramatic music continues]

[Bob] I think the most exciting thing

is the number of young men and women

who we will save

if we really do this right.

And we trade machines

rather than human lives.

Some argue that autonomous weapons

will make warfare more precise

and more humane,

but it's actually difficult to predict

exactly how autonomous weapons

might change warfare ahead of time.

It's like the invention

of the Gatling gun.

[grim music plays]

Richard Gatling was an inventor,

and he saw soldiers coming back,

wounded in the Civil War,

and wanted to find ways

to make warfare more humane.

To reduce the number of soldiers

that were killed in war

by reducing the number

of soldiers in the battle.

[music builds]

And so he invented the Gatling gun,

an automated gun turned by a crank

that could automate the process of firing.

It increased effectively by a hundredfold

the firepower that soldiers could deliver.

[grim music continues]

Oftentimes, efforts to make warfare

more precise and humane...

[tense music plays]

...can have the opposite effect.

[Arash] Think about the effect

of one errant drone strike

in a rural area

that drives the local populace

against the United States,

against the local government.

You know, supposedly the good guys.

[tense music continues]

Now magnify that by 1,000.

[Emilia] The creation of a weapon system

that is cheap, scalable,

and doesn't require human operators

drastically changes

the actual barriers to conflict.

It keeps me up at night to think

of a world where war is ubiquitous,

and we no longer carry

the human and financial cost of war

because we're just so far removed from...

the lives that will be lost.

[somber melodic music plays]

[Sean] This whole thing is haunting me.

I just needed an example

of artificial intelligence misuse.

The unanticipated consequences

of doing that simple thought experiment

have gone way too far.

When I gave the presentation

on the toxic molecules

created by AI technology,

the audience's jaws dropped.

[Fabio] The next decision was whether

we should publish this information.

On one hand, you want to warn the world

of these sorts of capabilities,

but on the other hand,

you don't want to give somebody the idea

if they had never had it before.

We decided it was worth publishing

to maybe find some ways

to mitigate the misuse of this type of AI

before it occurs.

[grim music plays]

The general public's reaction

was shocking.

[tense music plays]

We can see the metrics on the page,

how many people have accessed it.

The kinds of articles we normally write,

we're lucky if we get...

A few thousand people look at our article

over a period of a year or multiple years.

It was 10,000 people

had read it within a week.

Then it was 20,000,

then it was 30,000, then it was 40,000,

and we were up to 10,000 people a day.

[Sean] We've done The Economist,

the Financial Times.

Radiolab, you know, they reached out.

Like, I've heard of Radiolab!

[music crescendos]

But then the reactions turned

into this thing that's out of control.

[tense music continues]

When we look at those tweets, it's like,

"Oh my God, could they do anything worse?"

Why did they do this?

[music crescendos]

And then we got an invitation

I never would have anticipated.

[dramatic music plays]

There was a lot of discussion

inside the White House about the article,

and they wanted to talk to us urgently.

Obviously, it's an incredible honor

to be able

to talk to people at this level.

But then it hits you

like, "Oh my goodness,

it's the White House. The boss."

This involved putting together

data sets that were open source...

And in about six hours, the model was able

to generate about over 40,000...

[Sean] They asked questions

about how much computing power you needed,

and we told them it was nothing special.

Literally a standard run-of-the-mill,

six-year-old Mac.

And that blew them away.

[dramatic music continues]

The folks that are in charge

of understanding chemical warfare agents

in governmental agencies,

they had no idea of this potential.

[music intensifies]

We've got this cookbook

to make these chemical weapons,

and in the hands of a bad actor

that has malicious intent

it could be utterly horrifying.

[grim music plays]

People have to sit up and listen,

and we have to take steps

to either regulate the technology

or constrain it in a way

that it can't be misused.

Because the potential for lethality...

is terrifying.

The question of the ethics of AI

is largely addressed by society,

not by the engineers or technologists,

the mathematicians.

[beeping]

Every technology that we bring forth,

every novel innovation,

ultimately falls under the purview

of how society believes we should use it.

[somber music plays]

[Bob] Right now,

the Department of Defense says,

"The only thing that is saying

we are going to kill something

on the battlefield is a human."

A machine can do the killing,

but only at the behest

of a human operator,

and I don't see that ever changing.

[somber music continues]

[Arash] They assure us

that this type of technology will be safe.

But the United States military just

doesn't have a trustworthy reputation

with drone warfare.

And so, when it comes

to trusting the U.S. military with AI,

I would say, you know, the track record

kinda speaks for itself.

[Paul] The U.S. Defense Department policy

on the use of autonomy in weapons

does not ban any kind of weapon system.

And even if militaries

might not want autonomous weapons,

we could see militaries handing over

more decisions to machines

just to keep pace with competitors.

And that could drive militaries

to automate decisions

that they may not want to.

[Bob] Vladimir Putin said,

"Whoever leads in AI

is going to rule the world."

President Xi has made it clear that AI

is one of the number one technologies

that China wants to dominate in.

We're clearly

in a technological competition.

[Brandon] You hear people talk

about guardrails,

and I believe

that is what people should be doing.

But there is a very real race

for AI superiority.

[birds chirping]

And our adversaries, whether it's China,

whether it's Russia, whether it's Iran,

are not going to give two thoughts

to what our policy says around AI.

[birds singing]

You're seeing a lot more conversations

around AI policy,

but I wish more leaders

would have the conversation

saying, ''How quickly

can we build this thing?

Let's resource the heck out of it

and build it."

[dramatic music plays]

[horns honking]

[indistinct chatter]

[inaudible conversation]

[beeping]

We are at the Association of the U.S.

Army's biggest trade show of the year.

Basically, any vendor who is selling

a product or technology into a military

will be exhibiting.

[man 18] The Tyndall Air Force Base

has four of our robots

that patrol their base

24 hours a day, 7 days a week.

We can add everything from cameras

to sensors to whatever you need.

Manipulator arms. Again, just to complete

the mission that the customer has in mind.

[man 19] What if your enemy introduces AI?

A fighting system that thinks

faster than you, responds faster,

than what a human being can do?

We've got to be prepared.

We train our systems

to collect intel on the enemy,

managing enemy targets

with humans supervising the kill chain.

[music crescendos]

- [Brandon] Hi, General.

- How you doing?

[Brandon] Good, sir. How are you? Um...

I'll just say

no one is investing more in an AI pilot.

Our AI pilot's called Hivemind,

so we applied it to our quadcopter Nova.

It goes inside buildings,

explores them

ahead of special operation forces

and infantry forces.

We're applying Hivemind to V-BAT,

so I think about, you know,

putting up hundreds of those teams.

Whether it's the Taiwan Strait,

whether it's in the Ukraine,

deterring our adversaries.

So, pretty excited about it.

- All right. Thank you.

- So. Thank you, General.

[indistinct chatter]

AI pilots should be ubiquitous,

and that should be the case by 2025, 2030.

Its adoption will be rapid

throughout militaries across the world.

[inaudible chatter]

What do you do with the Romanian military,

their UAS guy?

[soldier] High-tech in the military.

We've spent half a billion dollars to date

on building an AI pilot.

We will spend another billion dollars

over the next five years. And that is...

It's a major reason why we're winning

the programs of record in the U.S.

Nice.

I mean, it's impressive.

You succeeded to weaponize that.

Uh, it is... This is not weaponized yet.

So not yet. But yes, in the future.

Our customers think about it as a truck.

We think of it as an intelligent truck

that can do a lot of different things.

Thank you, buddy.

[Brandon] I'll make sure

to follow up with you.

[inaudible chatter]

If you come back in 10 years,

you'll see that, um...

AI and autonomy

will have dominated this entire market.

[cryptic music plays]

[Nathan] Forces that are supported

by AI and autonomy

will absolutely dominate,

crush, and destroy forces without.

It'll be the equivalent

of horses going up against tanks,

people with swords

going up against the machine gun.

It will not even be close.

[Brandon] It will become ubiquitous,

used at every spectrum of warfare,

the tactical level,

the strategic level,

operating at speeds

that humans cannot fathom today.

[Paul] Commanders are already overwhelmed

with too much information.

Imagery from satellites,

and drones, and sensors.

One of the things AI can do

is help a commander

more rapidly understand what is occurring.

And then, "What are the decisions

I need to make?"

[Bob] Artificial intelligence

will take into account all the factors

that determine the way war is fought,

come up with strategies...

and give recommendations

on how to win a battle.

[cryptic music continues]

[birds chirping]

[man 20] We at Lockheed Martin,

like our Department of Defense customer,

view artificial intelligence

as a key technology enabler

for command and control.

[analyst 1] The rate of spread

has an average of two feet per second.

[analyst 2] This perimeter

is roughly 700 acres.

[man 20] The fog of war is

a reality for us on the defense side,

but it has parallels

to being in the environment

and having to make decisions

for wildfires as well.

[analyst 1] The Washburn fire

is just north of the city of Wawona.

[man 20] You're having to make decisions

with imperfect data.

And so how do we have AI help us

with that fog of war?

Wildfires are very chaotic.

They're very complex,

and so we're working

to utilize artificial intelligence

to help make decisions.

[somber melodic music plays]

The Cognitive Mission Manager

is a program we're building

that takes aerial infrared video

and then processes it

through our AI algorithms

to be able to predict

the future state of the fire.

[music intensifies]

[man 21] As we move into the future,

the Cognitive Mission Manager

will use simulation,

running scenarios

over thousands of cycles,

to recommend the most effective way

to deploy resources

to suppress high-priority areas of fire.
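
In outline, that recommendation step is a Monte Carlo search: simulate the fire many times under each candidate deployment and pick the one with the smallest expected burn. A toy sketch with invented numbers and option names, not Lockheed Martin's Cognitive Mission Manager:

```python
import random

# Assumed spread-rate reduction for each candidate deployment (made up).
OPTIONS = {
    "Firehawk aerial suppression": 0.35,
    "ground crews clearing brush": 0.25,
    "hose lines at structures":    0.15,
}

def one_scenario(reduction: float) -> float:
    """Acres burned after 24 toy hourly steps under one random wind draw."""
    acres = 700.0                        # starting size, per the film
    rate = 2.0 * (1.0 - reduction)       # feet/second, cut by the option
    for _ in range(24):
        rate *= random.uniform(0.9, 1.2)  # wind variability
        acres += rate * 10.0              # toy growth term
    return acres

def expected_burn(reduction: float, cycles: int = 2000) -> float:
    # "Running scenarios over thousands of cycles."
    return sum(one_scenario(reduction) for _ in range(cycles)) / cycles

best = min(OPTIONS, key=lambda name: expected_burn(OPTIONS[name]))
print("recommended deployment:", best)
```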

[tense music plays]

It'll say, "Go perform an aerial

suppression with a Firehawk here."

"Take ground crews that clear brush...

[sirens wail]

...firefighters that are hosing down,

and deploy them

into the highest priority areas."

[music intensifies]

Those decisions

will be able to be generated faster

and more efficiently.

[Justin T.] We view AI

as uniquely allowing our humans

to be able to keep up

with the ever-changing environment.

[grim music plays]

And there are

a credible number of parallels

to what we're used to at Lockheed Martin

on the defense side.

[Emilia] The military is no longer

talking about just using AI

in individual weapons systems

to make targeting and kill decisions,

but integrating AI

into the whole decision-making

architecture of the military.

[beeping]

[Bob] The Army has a big project

called Project Convergence.

The Navy has Overmatch.

And the Air Force has

Advanced Battle Management System.

[beeping]

The Department of Defense

is trying to figure out,

"How do we put all these pieces together,

so that we can operate

faster than our adversary

and really gain an advantage?"

[Stacie] An AI Battle Manager would be

like a fairly high-ranking General

who's in charge of the battle.

[grim music continues]

Helping to give orders

to large numbers of forces,

coordinating the actions

of all of the weapons that are out there,

and doing it at a speed

that no human could keep up with.

[music intensifies]

[Emilia] We've spent the past 70 years

building the most sophisticated military

on the planet,

and now we're facing the decision

as to whether we want to cede control

over that infrastructure to an algorithm,

to software.

[indistinct chatter]

And the consequences of that decision

could trigger the full weight

of our military arsenals.

That's not one Hiroshima. That's hundreds.

[music crescendos]

This is the time that we need to act

because the window to actually

contain this risk is rapidly closing.

[melodic music plays]

[UN chair] This afternoon, we start

with international security challenges

posed by emerging technologies

in the area of lethal

autonomous weapons systems.

[Emilia] Conversations are happening

within the United Nations

about the threat

of lethal autonomous weapons

and our prohibition on systems

that use AI to select and target people.

Consensus amongst technologists

is clear and resounding.

We are opposed

to autonomous weapons that target humans.

[Izumi] For years, states have actually

discussed this issue

of lethal autonomous weapon systems.

This is about a common,

shared sense of security.

But of course, it's not easy.

Certain countries,

especially those military powers,

they want to be ahead of the curve,

so that they will be

ahead of their adversaries.

[Paul] The problem is, everyone

has to agree to get anything done.

There will be

at least one country that objects,

and certainly the United States

and Russia have both made clear

that they are opposed to a treaty

that would ban autonomous weapons.

[Emilia] When we think about the number

of people working to make AI more powerful

that room is very crowded.

When we think about the room of people,

making sure that AI is safe,

that room's much more sparsely populated.

[chuckles]

But I'm also really optimistic.

[melodic music plays]

I look at something

like the Biological Weapons Convention,

which happened

in the middle of the Cold War...

[crowd cheers]

...despite tensions between the Soviet Union

and the United States.

They were able to realize

that the development of biological weapons

was in neither of their best interests,

and not in the best interests

of the world at large.

[music intensifies]

Arms race dynamics

favor speed over safety.

[beeping]

But I think

what's important to consider is,

at some point,

the cost of moving fast becomes too high.

[beeping]

[Sean] We can't just develop things

in isolation

and put them out there without any thought

of where they could go in the future.

We've got to prevent

that atom bomb moment.

[music intensifies]

[Brandon] The stakes in the AI race

are massive.

I don't think a lot of people

appreciate the global stability

that has been provided

by having a superior military force

for the past 75 years.

[beeping]

And so the United States

and our allied forces,

they need to outperform adversarial AI.

[music crescendos]

There is no second place in war.

[reporter 6] China laid out

an ambitious plan

to be the world leader in AI by 2030.

It's a race that some say America

is losing...

[tense music plays]

[official] He will accelerate

the adoption of artificial intelligence

to ensure

our competitive military advantage.

[beeping]

[music intensifies]

[Paul] We are racing forward

with this technology.

I think what's unclear

is how far are we going to go?

Do we control technology,

or does it control us?

[Emilia] There's really no opportunity

for do-overs.

Once the genie is out of the bottle,

it is out.

And it is very, very difficult

to put it back in.

[beeping]

[Sean] And if we don't act now,

it's too late.

[beeping]

It may already be too late.

[beeping]

[music crescendos]

[somber melodic music plays]

[harmonic tones chime]