Category Archives: Drones

UK directing hideous Yemen civilian bombing campaign. Is Parliament not interested?

U.S. and U.K. Continue to Participate in War Crimes, Targeting of Yemeni Civilians

By Glenn Greenwald

From the start of the hideous Saudi bombing campaign against Yemen 18 months ago, two countries have played active, vital roles in enabling the carnage: the U.S. and U.K. The atrocities committed by the Saudis would have been impossible without their steadfast, aggressive support.


The Obama administration “has offered to sell $115 billion worth of weapons to Saudi Arabia over its eight years in office, more than any previous U.S. administration,” as The Guardian reported this week, and also provides extensive surveillance technology. As The Intercept documented in April, “In his first five years as president, Obama sold $30 billion more in weapons than President Bush did during his entire eight years as commander in chief.”

Most important, according to the Saudi foreign minister, although it is the Saudis who have ultimate authority to choose targets, “British and American military officials are in the command and control center for Saudi airstrikes on Yemen” and “have access to lists of targets.” In sum, while this bombing campaign is invariably described in Western media outlets as “Saudi-led,” the U.S. and U.K. are both central, indispensable participants. As the New York Times editorial page put it in August: “The United States is complicit in this carnage,” while The Guardian editorialized that “Britain bears much responsibility for this suffering.”

From the start, the U.S.- and U.K.-backed Saudis have indiscriminately and at times deliberately bombed civilians, killing thousands of innocent people. From Yemen, Iona Craig and Alex Potter have reported extensively for The Intercept on the widespread civilian deaths caused by this bombing campaign. As the Saudis continued to recklessly and intentionally bomb civilians, the American and British weapons kept pouring into Riyadh, ensuring that the civilian massacres continued. Every once in a while, when a particularly gruesome mass killing made its way into the news, Obama and various British officials would issue cursory, obligatory statements expressing “concern,” then go right back to fueling the attacks.

This weekend, as American attention was devoted almost exclusively to Donald Trump, one of the most revolting massacres took place. On Saturday, warplanes attacked a funeral gathering in Sana, repeatedly bombing the hall where it took place, killing over 100 people and wounding more than 500. Video shows just some of the destruction and carnage:

Video shows double tap Saudi airstrike on funeral hall in Sanaa, #Yemen, today. Hundreds killed or wounded. Saudis deny, no word from US. pic.twitter.com/6TYlQWPrCN

— Samuel Oakford (@samueloakford) October 8, 2016

Saudi officials first lied by trying to blame “other causes” but have since walked that back. The next time someone who identifies with the Muslim world attacks American or British citizens, and those countries’ leading political voices answer the question “why, oh why, do they hate us?” by assuring everyone that “they hate us for our freedoms,” it would be instructive to watch that video.

The Obama White House, through its spokesperson Ned Price, condemned what it called “the troubling series of attacks striking Yemeni civilians” — attacks, it did not note, it has repeatedly supported — and lamely warned that “U.S. security cooperation with Saudi Arabia is not a blank check.” That is exactly what it is. The 18 months of bombing supported by the U.S. and U.K. has, as the NYT put it this morning, “largely failed, while reports of civilian deaths have grown common, and much of the country is on the brink of famine.”

It has been known from the start that the Saudi bombing campaign has been indiscriminate and reckless, and yet Obama and the U.K. government continued to play central roles. A U.N. report obtained in January by The Guardian “uncovered ‘widespread and systematic’ attacks on civilian targets in violation of international humanitarian law”; the report found that “the coalition had conducted airstrikes targeting civilians and civilian objects, in violation of international humanitarian law, including camps for internally displaced persons and refugees; civilian gatherings, including weddings; civilian vehicles, including buses; civilian residential areas; medical facilities; schools; mosques; markets, factories and food storage warehouses; and other essential civilian infrastructure.”

But what was not known, until an excellent Reuters report by Warren Strobel and Jonathan Landay this morning, is that Obama was explicitly warned not only that the Saudis were committing war crimes, but that the U.S. itself could be legally regarded as complicit in them:

The Obama administration went ahead with a $1.3 billion arms sale to Saudi Arabia last year despite warnings from some officials that the United States could be implicated in war crimes for supporting a Saudi-led air campaign in Yemen that has killed thousands of civilians, according to government documents and the accounts of current and former officials.

State Department officials also were privately skeptical of the Saudi military’s ability to target Houthi militants without killing civilians and destroying “critical infrastructure” needed for Yemen to recover, according to the emails and other records obtained by Reuters and interviews with nearly a dozen officials with knowledge of those discussions.

In other words, the 2009 Nobel Peace Prize winner was explicitly advised that he might be a collaborator in war crimes by arming a campaign that deliberately targets civilians, and continued to provide record-breaking amounts of arms to aid their prosecution. None of that should be surprising: It would be difficult for Obama to condemn “double-tap” strikes of the kind the Saudis just perpetrated — where first responders or mourners are targeted — given that he himself has used that tactic, commonly described as a hallmark of “terrorism.” For their part, the British blocked EU inquiries into whether war crimes were being committed in Yemen, while key MPs have blocked reports proving that U.K. weapons were being used in the commission of war crimes and the deliberate targeting of civilians.

The U.S. and U.K. are the two leading countries when it comes to cynically exploiting human rights concerns and the laws of war to attack their adversaries. They and their leading columnists love to issue pretty, self-righteous speeches about how other nations — those primitive, evil ones over there — target civilians and commit war crimes. Yet here they both are, standing firmly behind one of the planet’s most brutal and repressive regimes, arming it to the teeth with the full and undeniable knowledge that they are enabling massacres that recklessly, and in many cases, deliberately, target civilians.

And these 18 months of atrocities have barely merited a mention in the U.S. election, despite the key role the leading candidate, Hillary Clinton, has played in arming the Saudis, to say nothing of the millions of dollars her family’s foundation has received from its regime (her opponent, Donald Trump, has barely uttered a word about the issue, and has himself received millions in profits from various Saudi oligarchs).

One reason American and British political and media elites love to wax eloquent when condemning the brutality of the enemies of their own government is that doing so advances tribal, nationalistic ends: It’s a strategy for weakening adversaries while strengthening their own governments. But at least as significant a motive is that issuing such condemnations distracts attention from their own war crimes and massacres, the ones they are enabling and supporting.

There are some nations on the planet with credibility to condemn war crimes and the deliberate targeting of civilians. The two countries that have spent close to two years arming Saudi Arabia in its ongoing slaughter of Yemeni civilians are most certainly not among them.

October 11, 2016, “Information Clearing House” – “The Intercept”

 


What Could Go Wrong? US Unveils Artificially Intelligent Fighter Pilot

 


The two most aggressive military forces in the world have added a new frontier in their immense ability to deal death and destruction. In the same week, an Israeli firm launched the first-ever torpedo from an unmanned sea vessel while a U.S. artificially intelligent fighter pilot easily won combat simulations against human pilots.

These achievements are a testament to the sad reality that military interests are often the first to take advantage of wondrous advancements such as AI, just as nuclear physics and other technologies were hijacked for more efficient methods of killing.

Stephen Hawking pointed this out during an interview on the Larry King show.

Governments seem to be engaged in an AI arms race, designing planes and weapons with intelligent technologies. The funding for projects directly beneficial to the human race, such as improved medical screening seems a somewhat lower priority.

The AI fighter jet pilot, known as Alpha, was developed by researchers from the University of Cincinnati and defense company Psibernetix. It used four virtual jets to defend a coastline from two attacking planes with superior weapons systems—without suffering any losses.

Retired US Air Force colonel Gene Lee was shot out of the air every time after protracted engagements, and could not even manage to score a hit on Alpha.

The groundbreaking feat was accomplished through the use of “fuzzy logic” to efficiently compute the massive amounts of data from a simulated fighter jet. Instead of analyzing every bit of data equally, fuzzy logic assigns a degree of truth or significance to the pieces of data before making a broader decision.

“Here, you’ve got an AI system that seems to be able to deal with the air-to-air environment, which is extraordinarily dynamic, has an extraordinary number of parameters and, in the paper, more than holds its own against a skilled and capable, experienced combat pilot,” said Doug Barrie, a military aerospace analyst at think tank IISS.

“It’s like a chess master losing out to a computer.”

For now, the talk is about using Alpha “as a simulation tool or as a device to help develop better systems for assisting human pilots in the air.” But it’s a safe assumption that using AI to pilot real machines is being explored by a military machine incessantly hungry for the next best means of reaping death and destruction.

If an AI fighter pilot were ever to fly an actual fighter jet, the obvious question is, what happens when it decides to attack a non-military target? Of course, human pilots routinely bomb innocent civilians, but this is “justified” as collateral damage in the pursuit of defeating the bogeyman du jour.

The unmanned torpedo-launching sea vessel system, called Seagull, will soon be put into use by the Israeli Navy for use against submarines and sea mines. It consists of one or two surface vessels, each about 40 feet long, operated remotely from manned ships or the shore.

One vessel carries a sonar system that can search the entire water volume, with another that deploys an underwater robot for further investigation. When a threat is confirmed, a vessel launches a torpedo-like weapon to destroy the target.

This test, carried out in the Haifa port, marks the first time that a torpedo has been launched from an unmanned boat.

‘The success of the first torpedo launch test is a major milestone, confirming the Unmanned Surface Vessel’s capability to incorporate weapons that counter submarines, in addition to its unique submarine and mine detection capabilities,’ said Elbit, the firm behind the trial.

While it’s great to “take the man out of the minefield,” the Seagull also represents the kind of military advancement we see with unmanned aerial vehicles, or drones.

As we know, drones have been the tool of choice for expanding undeclared war into countries in the name of fighting terrorism, which has resulted in thousands of innocent civilians being killed. When bombing is carried out remotely, from the comfort of a padded seat in a secure building, the operator is that much more detached from the reality of killing people.

Former drone operators have gone public about the blood lust and indifference that characterize the drone assassination program, and about fellow operators getting intoxicated to “bend that reality and try to picture yourself not being there.”

Of course, operating torpedo-equipped sea vessels is a different beast, but the trend toward unmanned killing machines is nonetheless troubling. With AI being brought into military technology, how long before scenes from the Terminator movies are a real thing?

By Justin Gardner

 

Justin Gardner writes for TheFreeThoughtProject.com, where this article first appeared.

 


The Pentagon is building a ‘self-aware’ killer robot army fueled by social media — INSURGE intelligence — Medium

Imagine one of these giant robot dog things being weaponized and chasing you through the jungle because you turned up on a Pentagon kill list after posting angry stuff on social media

The Pentagon is building a ‘self-aware’ killer robot army fueled by social media

Official US defence and NATO documents confirm that autonomous weapon systems will kill targets, including civilians, based on tweets, blogs and Instagram

by Nafeez Ahmed

This exclusive is published by INSURGE INTELLIGENCE, a crowd-funded investigative journalism project for the global commons

An unclassified 2016 Department of Defense (DoD) document, the Human Systems Roadmap Review, reveals that the US military plans to create artificially intelligent (AI) autonomous weapon systems, which will use predictive social media analytics to make decisions on lethal force with minimal human involvement.

Despite official insistence that humans will retain a “meaningful” degree of control over autonomous weapon systems, this and other Pentagon documents dated from 2015 to 2016 confirm that US military planners are already developing technologies designed to enable swarms of “self-aware” interconnected robots to design and execute kill operations against robot-selected targets.

More alarmingly, the documents show that the DoD believes that within just fifteen years, it will be feasible for mission planning, target selection and the deployment of lethal force to be delegated entirely to autonomous weapon systems in air, land and sea. The Pentagon expects AI threat assessments for these autonomous operations to be derived from massive data sets including blogs, websites, and multimedia posts on social media platforms like Twitter, Facebook and Instagram.

The raft of Pentagon documentation flatly contradicts Deputy Defense Secretary Robert Work’s denial that the DoD is planning to develop killer robots.

In a widely reported March conversation with Washington Post columnist David Ignatius, Work said that this may change as rival powers work to create such technologies:

“We might be going up against a competitor that is more willing to delegate authority to machines than we are, and as that competition unfolds we will have to make decisions on how we best can compete.”

But, he insisted, “We will not delegate lethal authority to a machine to make a decision,” except for “cyber or electronic warfare.”

He lied.

Official US defence and NATO documents dissected by INSURGE intelligence reveal that Western governments are already planning to develop autonomous weapons systems with the capacity to make decisions on lethal force — and that such systems, in the future, are even expected to make decisions on acceptable levels of “collateral damage.”

Behind public talks, a secret arms race

Efforts to create autonomous robot killers have evolved over the last decade, but have come to a head this year.

A National Defense Industry Association (NDIA) conference on Ground Robotics Capabilities in March hosted government officials and industry leaders confirming that the Pentagon was developing robot teams that would be able to use lethal force without direction from human operators.

In April, government representatives and international NGOs convened at the United Nations in Geneva to discuss the legal and ethical issues surrounding lethal autonomous weapon systems (LAWS).

That month, the UK government launched a parliamentary inquiry into robotics and AI. And earlier in May, the White House Office of Science and Technology announced a series of public workshops on the wide-ranging social and economic implications of AI.

Prototype Terminator Bots?

Most media outlets have reported the fact that so far, governments have not ruled out the long-term possibility that intelligent robots could be eventually authorized to make decisions to kill human targets autonomously.

But contrary to Robert Work’s claim, active research and development efforts to explore this possibility are already underway. The plans can be gleaned from several unclassified Pentagon documents in the public record that have gone unnoticed, until now.

Among them is a document released in February 2016 from the Pentagon’s Human Systems Community of Interest (HSCOI).

The document shows not only that the Pentagon is actively creating lethal autonomous weapon systems, but that a crucial component of the decision-making process for such robotic systems will include complex Big Data models, one of whose inputs will be public social media posts.

Robots that kill ‘like people’

The HSCOI is a little-known multi-agency research and development network seeded by the Office of the Secretary of Defense (OSD), which acts as a central hub for a wide range of science and technology work across US military and intelligence agencies.

The document is a 53-page presentation prepared by HSCOI chair, Dr. John Tangney, who is Director of the Office of Naval Research’s Human and Bioengineered Systems Division. Titled Human Systems Roadmap Review, the slides were presented at the NDIA’s Human Systems Conference in February.

The document says that one of the five “building blocks” of the Human Systems program is to “Network-enable, autonomous weapons hardened to operate in a future Cyber/EW [electronic warfare] Environment.” This would allow for “cooperative weapon concepts in communications-denied environments.”

But then the document goes further, identifying one of its “focus areas” for science and technology development as “Autonomous Weapons: Systems that can take action, when needed”, along with “Architectures for Autonomous Agents and Synthetic Teammates.”

The final objective is the establishment of “autonomous control of multiple unmanned systems for military operations.”

Such autonomous systems must be capable of selecting and engaging targets by themselves — with human “control” drastically minimized to affirming that the operation remains within the parameters of the Commander’s “intent.”

The document explicitly asserts that these new autonomous weapon systems should be able to respond to threats without human involvement, but in a way that simulates human behavior and cognition.

The DoD’s HSCOI program must “bridge the gap between high fidelity simulations of human cognition in laboratory tasks and complex, dynamic environments.”

Referring to the “Mechanisms of Cognitive Processing” of autonomous systems, the document highlights the need for:

“More robust, valid, and integrated mechanisms that enable constructive agents that truly think and act like people.”

The Pentagon’s ultimate goal is to develop “Autonomous control of multiple weapon systems with fewer personnel” as a “force multiplier.”

The new systems must display “highly reliable autonomous cooperative behavior” to allow “agile and robust mission effectiveness across a wide range of situations, and with the many ambiguities associated with the ‘fog of war.’”

Resurrecting the human terrain

The HSCOI consists of senior officials from the US Army, Navy, Marine Corps, Air Force, and the Defense Advanced Research Projects Agency (DARPA), and is overseen by the Assistant Secretary of Defense for Research & Engineering and the Assistant Secretary of Defense for Health Affairs.

HSCOI’s work goes well beyond simply creating autonomous weapons systems. An integral part of this is simultaneously advancing human-machine interfaces and predictive analytics.

The latter includes what a HSCOI brochure for the technology industry, ‘Challenges, Opportunities and Future Efforts’, describes as creating “models for socially-based threat prediction” as part of “human activity ISR.”

This is short-hand for intelligence, surveillance and reconnaissance of a population in an ‘area of interest’, by collecting and analyzing data on the behaviors, culture, social structure, networks, relationships, motivation, intent, vulnerabilities, and capabilities of a human group.

The idea, according to the brochure, is to bring together open source data from a wide spectrum, including social media sources, in a single analytical interface that can “display knowledge of beliefs, attitudes and norms that motivate in uncertain environments; use that knowledge to construct courses of action to achieve Commander’s intent and minimize unintended consequences; [and] construct models to allow accurate forecasts of predicted events.”

The Human Systems Roadmap Review document from February 2016 shows that this area of development is a legacy of the Pentagon’s controversial “human terrain” program.

The Human Terrain System (HTS) was a US Army Training and Doctrine Command (TRADOC) program established in 2006, which embedded social scientists in the field to augment counterinsurgency operations in theaters like Iraq and Afghanistan.

The idea was to use social scientists and cultural anthropologists to provide the US military actionable insight into local populations to facilitate operations — in other words, to weaponize social science.

The $725 million program was shut down in September 2014 in the wake of growing controversy over its sheer incompetence.

The HSCOI program that replaces it includes social sciences but the greater emphasis is now on combining them with predictive computational models based on Big Data. The brochure puts the projected budget for the new human systems project at $450 million.

The Pentagon’s Human Systems Roadmap Review demonstrates that far from being eliminated, the HTS paradigm has been upgraded as part of a wider multi-agency program that involves integrating Big Data analytics with human-machine interfaces, and ultimately autonomous weapon systems.

The new science of social media crystal ball gazing

The 2016 human systems roadmap explains that the Pentagon’s “vision” is to use “effective engagement with the dynamic human terrain to make better courses of action and predict human responses to our actions” based on “predictive analytics for multi-source data.”

Are those ‘soldiers’ in the photo human… or are they really humanoid (killer) robots?

In a slide entitled, ‘Exploiting Social Data, Dominating Human Terrain, Effective Engagement,’ the document provides further detail on the Pentagon’s goals:

“Effectively evaluate/engage social influence groups in the op-environment to understand and exploit support, threats, and vulnerabilities throughout the conflict space. Master the new information environment with capability to exploit new data sources rapidly.”

The Pentagon wants to draw on massive repositories of open source data that can support “predictive, autonomous analytics to forecast and mitigate human threats and events.”

This means not just developing “behavioral models that reveal sociocultural uncertainty and mission risk”, but creating “forecast models for novel threats and critical events with 48–72 hour timeframes”, and even establishing technology that will use such data to “provide real-time situation awareness.”

According to the document, “full spectrum social media analysis” is to play a huge role in this modeling, to support “I/W [irregular warfare], information operations, and strategic communications.”

This is broken down further into three core areas:

“Media predictive analytics; Content-based text and video retrieval; Social media exploitation for intel.”

The document refers to the use of social media data to forecast future threats and, on this basis, automatically develop recommendations for a “course of action” (CoA).

Under the title ‘Weak Signal Analysis & Social Network Analysis for Threat Forecasting’, the Pentagon highlights the need to:

“Develop real-time understanding of uncertain context with low-cost tools that are easy to train, reduce analyst workload, and inform COA [course of action] selection/analysis.”

In other words, the human input into the development of course of action “selection/analysis” must be increasingly reduced, and replaced with automated predictive analytical models that draw extensively on social media data.

This can even be used to inform soldiers of real-time threats using augmented reality during operations. The document refers to “Social Media Fusion to alert tactical edge Soldiers” and “Person of Interest recognition and associated relations.”

The idea is to identify potential targets — ‘persons of interest’ — and their networks, in real-time, using social media data as ‘intelligence.’

Meaningful human control without humans

Both the US and British governments are therefore rapidly attempting to redefine “human control” and “human intent” in the context of autonomous systems.

Among the problems that emerged at the UN meetings in April is the tendency to dilute the parameters that define what counts as “meaningful” human control over an autonomous weapon system.

A separate Pentagon document dated March 2016 — a set of presentation slides for that month’s IEEE Conference on Cognitive Methods in Situation Awareness & Decision Support — insists that DoD policy is to ensure that autonomous systems ultimately operate under human supervision:

“[The] main benefits of autonomous capabilities are to extend and complement human performance, not necessarily provide a direct replacement of humans.”

Unfortunately, there is a ‘but’.

The March document, Autonomous Horizons: System Autonomy in the Air Force, was authored by Dr. Greg Zacharias, Chief Scientist of the US Air Force. The IEEE conference where it was presented was sponsored by two leading government defense contractors, Lockheed Martin and United Technologies Corporation, among other patrons.

Further passages of the document are revealing:

“Autonomous decisions can lead to high-regret actions, especially in uncertain environments.”

In particular, the document observes:

“Some DoD activity, such as force application, will occur in complex, unpredictable, and contested environments. Risk is high.”

The solution, supposedly, is to design machines that basically think, learn and problem solve like humans. An autonomous AI system should “be congruent with the way humans parse the problem” and driven by “aiding/automation knowledge management processes along lines of the way humans solve problem [sic].”

A section titled ‘AFRL [Air Force Research Laboratory] Roadmap for Autonomy’ thus demonstrates how by 2020, the US Air Force envisages “Machine-Assisted Ops compressing the kill chain.” The bottom of the slide reads:

“Decisions at the Speed of Computing.”

This two-stage “kill chain” is broken down as follows: firstly, “Defensive system mgr [manager] IDs threats & recommends actions”; secondly, “Intelligence analytic system fuses INT [intelligence] data & cues analyst of threats.”

In this structure, a lethal autonomous weapon system draws on intelligence data to identify a threat, which an analyst simply “IDs”, before recommending “action.”

The analyst’s role here is simply to authorize the kill, but in reality the essential importance of human control — assessment of the integrity of the kill decision — has been relegated to the end of an entirely automated analytical process, as a mere perfunctory obligation.

By 2030, the document sees human involvement in this process as being reduced even further to an absolute minimum. While a human operator may be kept “in the loop” (in the document’s words) the Pentagon looks forward to a fully autonomous system consisting of:

“Optimized platform operations delivering integrated ISR [intelligence, surveillance and reconnaissance] and weapon effects.”

The goal, in other words, is a single integrated lethal autonomous weapon system combining full spectrum analysis of all data sources with “weapon effects” — that is, target selection and execution.

The document takes pains to layer this vision with a sense of ever-present human oversight.

AI “system self-awareness”

Yet an even more blunt assertion of the Pentagon’s objective is laid out in a third document, a set of slides titled DoD Autonomy Roadmap presented exactly a year earlier at the NDIA’s Defense Tech Expo.

The document authored by Dr. Jon Bornstein, who leads the DoD’s Autonomy Community of Interest (ACOI), begins by framing its contents with the caveat: “Neither Warfighter nor machine is truly autonomous.”

Yet it goes on to call for machine agents to develop:

“Perception, reasoning, and intelligence allow[ing] for entities to have existence, intent, relationships, and understanding in the battle space relative to a mission.”

This will be the foundation for two types of weapon systems: “Human/ Autonomous System Interaction and Collaboration (HASIC)” and “Scalable Teaming of Autonomous Systems (STAS).”

In the near term, machine agents will be able “to evolve behaviors over time based on a complex and ever-changing knowledge base of the battle space… in the context of mission, background knowledge, intent, and sensor information.”

However, it is the Pentagon’s “far term” vision for machine agents as “self-aware” systems that is particularly disturbing:

“Far Term:

•Ontologies adjusted through common-sense knowledge via intuition.

•Learning approaches based on self-exploration and social interactions.

•Shared cognition

•Behavioral stability through self-modification.

•System self-awareness”

It is in this context of the “self-awareness” of an autonomous weapon system that the document clarifies the need for the system to autonomously develop forward decisions for action, namely:

“Autonomous systems that appropriately use internal model-based/deliberative planning approaches and sensing/perception driven actions/control.”

The Pentagon specifically hopes to create what it calls “trusted autonomous systems”, that is, machine agents whose behavior and reasoning can be fully understood, and therefore “trusted” by humans:

“Collaboration means there must be an understanding of and confidence in behaviors and decision making across a range of conditions. Agent transparency enables the human to understand what the agent is doing and why.”

Once again, this is to facilitate a process by which humans are increasingly removed from the nitty gritty of operations.

In the “Mid Term”, there will be “Improved methods for sharing of authority” between humans and machines. In the “Far Term”, this will have evolved to a machine system functioning autonomously on the basis of “Awareness of ‘commanders intent’” and the “use of indirect feedback mechanisms.”

This will finally create the capacity to deploy “Scalable Teaming of Autonomous Systems (STAS)”, free of overt human direction, in which multiple machine agents display “shared perception, intent and execution.”

Teams of autonomous weapon systems will display “Robust self-organization, adaptation, and collaboration”; “Dynamic adaption, ability to self-organize and dynamically restructure”; and “Agent-to-agent collaboration.”

Notice the lack of human collaboration.

The “far term” vision for such “self-aware” autonomous weapon systems is not, as Robert Work claimed, limited to cyber or electronic warfare, but will include:

“Ground Convoys/Air-ground operations”; “Ballistic rate multi-agent operation”; “Smart munitions.”

These operations might even take place in tight urban environments — “in close proximity to other manned & unmanned systems including crowded military & civilian areas.”

The document admits, though, that the Pentagon’s major challenge is mitigating the effects of unpredictable environments and emergent behavior.

Autonomous systems are “difficult to assure correct behavior in a countless number of environmental conditions” and are “difficult to sufficiently capture and understand all intended and unintended consequences.”

Terminator teams, led by humans

The Autonomy roadmap document clearly confirms that the Pentagon’s final objective is to delegate the bulk of military operations to autonomous machines, capable of inflicting “Collective Defeat of Hard and Deeply Buried Targets.”

One type of machine agent is the “Autonomous Squad Member (Army)”, which “Integrates machine semantic understanding, reasoning, and perception into a ground robotic system”, and displays:

“Early implementation of a goal reasoning model, Goal-Directed Autonomy (GDA) to provide the robot the ability to self-select new goals when it encounters an unanticipated situation.”

Human team members in the squad must be able “to understand an intelligent agent’s intent, performance, future plans and reasoning processes.”

Another type is described under the header, ‘Autonomy for Air Combat Missions Team (AF).’

Such an autonomous air team, the document envisages, “Develops goal-directed reasoning, machine learning and operator interaction techniques to enable management of multiple, team UAVs.” This will achieve:

“Autonomous decision and team learning enable the TBM [Tactical Battle Manager] to maximize team effectiveness and survivability.”

TBM refers directly to a battle management autonomy software for unmanned aircraft.

The Pentagon still, of course, wants to ensure that there remains a human manual override, which the document describes as enabling a human supervisor “to ‘call a play’ or manually control the system.”

Targeting evil antiwar bloggers

Yet the biggest challenge, nowhere acknowledged in any of the documents, is ensuring that automated AI target selection actually selects real threats, rather than generating or pursuing false positives.

According to the Human Systems roadmap document, the Pentagon has already demonstrated extensive AI analytical capabilities in real-time social media analysis, through a NATO live exercise last year.

During the exercise, Trident Juncture — NATO’s largest exercise in a decade — US military personnel “curated over 2M [million] relevant tweets, including information attacks (trolling) and other conflicts in the information space, including 6 months of baseline analysis.” They also “curated and analyzed over 20K [i.e. 20,000] tweets and 700 Instagrams during the exercise.”

The Pentagon document thus emphasizes that the US Army and Navy can now already “provide real-time situation awareness and automated analytics of social media sources with low manning, at affordable cost”, so that military leaders can “rapidly see whole patterns of data flow and critical pieces of data” and therefore “discern actionable information readily.”

The primary contributor to the Trident Juncture social media analysis for NATO, which occurred over two weeks from late October to early November 2015, was a team led by information scientist Professor Nitin Agarwal of the University of Arkansas, Little Rock.

Agarwal’s project was funded by the US Office of Naval Research, Air Force Research Laboratory and Army Research Office, and conducted in collaboration with NATO’s Allied Joint Force Command and NATO Strategic Communications Center of Excellence.

Slides from a conference presentation about the research show that the NATO-backed project attempted to identify a hostile blog network during the exercise containing “anti-NATO and anti-US propaganda.”

Among the top seven blogs identified as key nodes for anti-NATO internet traffic were websites run by Andreas Speck, an antiwar activist; War Resisters International (WRI); and Egyptian democracy campaigner Maikel Nabil Sanad — along with some Spanish language anti-militarism sites.

Andreas Speck is a former staffer at WRI, which is an international network of pacifist NGOs with offices and members in the UK, Western Europe and the US. One of its funders is the Joseph Rowntree Charitable Trust.

The WRI is fundamentally committed to nonviolence, and campaigns against war and militarism in all forms.

Most of the blogs identified by Agarwal’s NATO project are affiliated to the WRI, including for instance nomilservice.com, WRI’s Egyptian affiliate founded by Maikel Nabil, which campaigns against compulsory military service in Egypt. Nabil was nominated for the Nobel Peace Prize and even supported by the White House for his conscientious objection to Egyptian military atrocities.

The NATO project urges:

“These 7 blogs need to be further monitored.”

The project was touted by Agarwal as a great success: it managed to extract 635 identity markers through metadata from the blog network, including 65 email addresses, 3 “persons”, and 67 phone numbers.

This is the same sort of metadata that is routinely used to help identify human targets for drone strikes — the vast majority of whom are not terrorists, but civilians.

Agarwal’s conference slides list three Pentagon-funded tools that his team created for this sort of social media analysis: Blogtracker, Scraawl, and Focal Structures Analysis.

Flagging up an Egyptian democracy activist like Maikel Nabil as a hostile entity promoting anti-NATO and anti-US propaganda demonstrates that when such automated AI tools are applied to war theatres in complex environments (think Pakistan, Afghanistan and Yemen), the potential to identify individuals or groups critical of US policy as terrorism threats is all too real.

This case demonstrates how deeply flawed the Pentagon’s automation ambitions really are. Even with the final input of independent human expert analysts, entirely peaceful pro-democracy campaigners who oppose war are relegated by NATO to the status of potential national security threats requiring further surveillance.

Compressing the kill chain

It’s often assumed that DoD Directive 3000.09 issued in 2012, ‘Autonomy in Weapon Systems’, limits kill decisions to human operators under the following stipulation in clause 4:

“Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

After several paragraphs underscoring the necessity of target selection and execution being undertaken under the oversight of a human operator, the Directive goes on to open up the possibility of developing autonomous weapon systems without any human oversight, albeit with the specific approval of senior Pentagon officials:

“Autonomous weapon systems may be used to apply non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets… Autonomous or semi-autonomous weapon systems intended to be used in a manner that falls outside the policies in subparagraphs 4.c.(1) through 4.c.(3) must be approved by the Under Secretary of Defense for Policy (USD(P)); the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)); and the CJCS before formal development and again before fielding.”

Rather than prohibiting the development of lethal autonomous weapon systems, the directive simply consolidates all such developments under the explicit authorization of the Pentagon’s top technology chiefs.

Worse, the directive expires on 21st November 2022 — which is around the time such technology is expected to become operational.

Indeed, later that year, Lieutenant Colonel Jeffrey S. Thurnher, a US Army lawyer at the US Naval War College’s International Law Department, published a position paper in the National Defense University publication, Joint Force Quarterly.

If these puppies became self-aware, would they be cuter?

He argued that there were no substantive legal or ethical obstacles to developing fully autonomous killer robots — as long as such systems are designed in such a way as to maintain a semblance of human oversight through “appropriate control measures.”

In the conclusions to his paper, titled No One At The Controls: Legal Implications of Fully Autonomous Targeting, Thurnher wrote:

“LARs [lethal autonomous robots] have the unique potential to operate at a tempo faster than humans can possibly achieve and to lethally strike even when communications links have been severed. Autonomous targeting technology will likely proliferate to nations and groups around the world. To prevent being surpassed by rivals, the United States should fully commit itself to harnessing the potential of fully autonomous targeting. The feared legal concerns do not appear to be an impediment to the development or deployment of LARs. Thus, operational commanders should take the lead in making this emerging technology a true force multiplier for the joint force.”

Lt. Col. Thurnher went on to become a Legal Advisor for NATO Rapid Deployable Corps in Munster, Germany. In this capacity, he was a contributor to a little-known 2014 official policy guidance document for NATO Allied Command Transformation, Autonomy in Defence Systems.

The NATO document, which aims to provide expert legal advice to government policymakers, sets out a position in which the deployment of autonomous weapon systems for lethal combat — in particular the delegation of targeting and kill decisions to machine agents — is viewed as being perfectly legitimate in principle.

It is the responsibility of specific states, the document concludes, to ensure that autonomous systems operate in compliance with international law in practice — a caveat that also applies for the use of autonomous systems for law-enforcement and self-defence.

In the future, though, the NATO document points to the development of autonomous systems that can “reliably determine when foreseen but unintentional harm to civilians is ethically permissible.”

Acknowledging that currently only humans are able to make a “judgement about the ethical permissibility of foreseen but unintentional harm to civilians (collateral damage)”, the NATO policy document urges states developing autonomous weapon systems to ensure that eventually they “are able to integrate with collateral damage estimation methodologies” so as to delegate targeting and kill decisions accordingly.

The NATO position is particularly extraordinary given that international law — such as the Geneva Conventions — defines foreseen deaths of civilians caused by a military action as intentional, precisely because they were foreseen yet actioned anyway.

The Statute of the International Criminal Court (ICC) identifies such actions as “war crimes”, if a justifiable and direct military advantage cannot be demonstrated:

“… making the civilian population or individual civilians, not taking a direct part in hostilities, the object of attack; launching an attack in the knowledge that such attack will cause incidental loss of civilian life, injury to civilians or damage to civilian objects which would be clearly excessive in relation to the concrete and direct military advantage anticipated;… making civilian objects, that is, objects that are not military objectives, the object of attack.”

And customary international law recognizes the following acts as war crimes:

“… launching an indiscriminate attack resulting in loss of life or injury to civilians or damage to civilian objects; launching an attack against works or installations containing dangerous forces in the knowledge that such attack will cause excessive incidental loss of civilian life, injury to civilians or damage to civilian objects.”

In other words, NATO’s official policy guidance on autonomous weapon systems sanitizes the potential for automated war crimes. The document actually encourages states to eventually develop autonomous weapons capable of inflicting “foreseen but unintentional” harm to civilians in the name of securing a ‘legitimate’ military advantage.

Yet the NATO document does not stop there. It even goes so far as to argue that policymakers considering the development of autonomous weapon systems for lethal combat should reflect on the possibility that delegating target and kill decisions to machine agents would minimize civilian casualties.

Skynet, anyone?

A new report by Paul Scharre, who led the Pentagon working group that drafted DoD Directive 3000.09 and now heads up the future warfare program at the Center for New American Security in Washington DC, does not mince words about the potentially “catastrophic” risks of relying on autonomous weapon systems.

“With an autonomous weapon,” he writes, “the damage potential before a human controller is able to intervene could be far greater…

“In the most extreme case, an autonomous weapon could continue engaging inappropriate targets until it exhausts its magazine, potentially over a wide area. If the failure mode is replicated in other autonomous weapons of the same type, a military could face the disturbing prospect of large numbers of autonomous weapons failing simultaneously, with potentially catastrophic consequences.”

Scharre points out that “autonomous weapons pose a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces,” due to any number of potential reasons, including “hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors.”

Noting that in the software industry, for every 1,000 lines of code, there are between 15 and 50 errors, Scharre points out that such marginal, routine errors could easily accumulate to create unexpected results that could be missed even by the most stringent testing and validation methods.

The more complex the system, the more difficult it will be to verify and track the system’s behavior under all possible conditions: “… the number of potential interactions within the system and with its environment is simply too large.”

The documents discussed here show that the Pentagon is taking pains to develop ways to mitigate these risks.

But as Scharre concludes, “these risks cannot be eliminated entirely. Complex tightly coupled systems are inherently vulnerable to ‘normal accidents.’ The risk of accidents can be reduced, but never can be entirely eliminated.”

As the trajectory toward AI autonomy and complexity accelerates, so does the risk that autonomous weapon systems will, eventually, wreak havoc.

Dr Nafeez Ahmed is an investigative journalist, bestselling author and international security scholar. A former Guardian writer, he writes the ‘System Shift’ column for VICE’s Motherboard, and is a weekly columnist for Middle East Eye.

He is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his Guardian work, and was twice selected in the Evening Standard’s top 1,000 most globally influential Londoners, in 2014 and 2015.

Nafeez has also written and reported for The Independent, Sydney Morning Herald, The Age, The Scotsman, Foreign Policy, The Atlantic, Quartz, Prospect, New Statesman, Le Monde diplomatique, New Internationalist, The Ecologist, Alternet, Counterpunch, Truthout, among others.

He is a Visiting Research Fellow at the Faculty of Science and Technology at Anglia Ruskin University, where he is researching the link between global systemic crises and civil unrest for Springer Energy Briefs.

Nafeez is the author of A User’s Guide to the Crisis of Civilization: And How to Save It (2010), and the scifi thriller novel ZERO POINT, among other books. His work on the root causes and covert operations linked to international terrorism officially contributed to the 9/11 Commission and the 7/7 Coroner’s Inquest.


This story is being released for free in the public interest, and was enabled by crowdfunding. I’d like to thank my amazing community of patrons for their support, which gave me the opportunity to work on this story. Please support independent, investigative journalism for the global commons via Patreon.com, where you can donate as much or as little as you like.

 


CISA Is Now The Law: How Congress Quietly Passed The Second Patriot Act

Update: CISA is now the law: OBAMA SIGNS SPENDING, TAX BILL THAT REPEALS OIL EXPORT BAN

* * *

Back in 2014, civil liberties and privacy advocates were up in arms when the government tried to quietly push through the Cybersecurity Information Sharing Act, or CISA, a law which would allow federal agencies – including the NSA – to share cybersecurity information, and really any information, with private corporations “notwithstanding any other provision of law.” The most vocal complaint involved CISA’s information-sharing channel, which was ostensibly created for responding quickly to hacks and breaches, and which provided a loophole in privacy laws that enabled intelligence and law enforcement surveillance without a warrant.

Ironically, in its earlier version, CISA had drawn the opposition of tech firms including Apple, Twitter, Reddit, as well as the Business Software Alliance, the Computer and Communications Industry Association and many others including countless politicians and, most amusingly, the White House itself.

In April, a coalition of 55 civil liberties groups and security experts signed onto an open letter opposing it. In July, the Department of Homeland Security itself warned that the bill could overwhelm the agency with data of “dubious value” at the same time as it “sweep[s] away privacy protections.” Most notably, the biggest aggregator of online private content, Facebook, vehemently opposed the legislation; however, a month ago it was “surprisingly” revealed that Zuckerberg had been quietly on the side of the NSA all along, as we reported in “Facebook Caught Secretly Lobbying For Privacy-Destroying “Cyber-Security” Bill.”

Even Snowden chimed in.

Following the blitz of opposition, the push to pass CISA was tabled after a White House threat to veto similar legislation. Then, quietly, CISA reemerged when the same White House mysteriously flip-flopped and expressed its support for precisely the same bill in August.

And then the masks fell off, when it became obvious that not only are corporations eager to pass CISA despite their previous outcry, but that they have both the White House and Congress in their pocket.

As Wired reminds us, when the Senate passed the Cybersecurity Information Sharing Act by a vote of 74 to 21 in October, privacy advocates were again “aghast” that the key portions of the law, which they said make it more amenable to surveillance than to actual security, were left intact, and that Congress had quietly stripped out “even more of its remaining privacy protections.”

“They took a bad bill, and they made it worse,” says Robyn Greene, policy counsel for the Open Technology Institute.

But while Congress was preparing a second assault on privacy, it needed a Trojan Horse with which to enact the proposed legislation into law without the public having the ability to reject it.

It found just that by attaching it to the Omnibus $1.1 trillion Spending Bill, which passed the House early this morning, passed the Senate moments ago and will be signed into law by the president in the coming hours.

This is how it happened, again courtesy of Wired:

In a late-night session of Congress, House Speaker Paul Ryan announced a new version of the “omnibus” bill, a massive piece of legislation that deals with much of the federal government’s funding. It now includes a version of CISA as well. Lumping CISA in with the omnibus bill further reduces any chance for debate over its surveillance-friendly provisions, or a White House veto. And the latest version actually chips away even further at the remaining personal information protections that privacy advocates had fought for in the version of the bill that passed the Senate.

It gets worse: it appears that while CISA was on hiatus, US lawmakers – working under the direction of corporations and the NSA – were seeking to weaponize the revised legislation, and as Wired says, the latest version of the bill appended to the omnibus legislation seems to exacerbate the problem of personal information protections.

It creates the ability for the president to set up “portals” for agencies like the FBI and the Office of the Director of National Intelligence, so that companies hand information directly to law enforcement and intelligence agencies instead of to the Department of Homeland Security. And it also changes when information shared for cybersecurity reasons can be used for law enforcement investigations. The earlier bill had only allowed that backchannel use of the data for law enforcement in cases of “imminent threats,” while the new bill requires just a “specific threat,” potentially allowing the search of the data for any specific terms regardless of timeliness.

Some, like Senator Ron Wyden, spoke out against the changes to the bill in a press statement, writing that they had worsened a bill he already opposed as a surveillance bill in the guise of cybersecurity protections. “Americans deserve policies that protect both their security and their liberty,” he wrote. “This bill fails on both counts.”

Senator Richard Burr, who had introduced the earlier version of the bill, didn’t immediately respond to a request for comment.

Why was the CISA included in the omnibus package, which just passed both the House and the Senate? Because any “nay” votes – or an Obama veto – would also have threatened the entire budget of the federal government. In other words, it was a question of either Americans keeping their privacy or halting the funding of the US government, in effect bankrupting the nation.

And best of all, the rushed bill means there will be no debate.

The bottom line, as OTI’s Robyn Greene said: “They’ve got this bill that’s kicked around for years and had been too controversial to pass, so they’ve seen an opportunity to push it through without debate. And they’re taking that opportunity.”

The punchline: “They’re kind of pulling a Patriot Act.”

And when Obama signs the $1.1 trillion Spending Bill in a few hours, as he will, it will be official: the second Patriot Act will be the law, and with it, what little online privacy US citizens may enjoy will be gone.

 

 

Source: Zero Hedge.

And Now for the Robot Apocalypse…

 


Well, you can’t blame them for trying, can you?

 

Earlier today the grandiloquently named “Future of Life Institute” (FLI) announced an open letter on the subject of ‘autonomous weapons.’ In case you’re not keeping up with artificial intelligence research, that means weapons that seek and engage targets all by themselves. While this sounds fanciful to the uninformed, it is in fact a dystopian nightmare that, thanks to startling innovations in robotics and artificial intelligence by various DARPA-connected research projects, is fast becoming a reality. Heck, people are already customizing their own multirotor drones to fire handguns; just slap some AI on that and call it Skynet.

 

Indeed, as anyone who has seen Robocop, Terminator, Bladerunner or a billion other sci-fi fantasies will know, gun-wielding, self-directed robots are not to be hailed as just another rung on the ladder of technical progress. But for those who are still confused on this matter, the FLI open letter helpfully elaborates: “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” In other words, instead of “autonomous weapons” we might get the point across more clearly if we just call them what they are: soulless killing machines. (But then we might risk confusing them with the psychopaths at the RAND Corporation or the psychopaths on the Joint Chiefs of Staff or the psychopaths in the CIA or the psychopaths in the White House…)

 

In order to confront this pending apocalypse, the fearless men and women at the FLI have bravely stepped up to the plate and…written a polite letter to ask governments to think twice before developing these really effective, well-nigh unstoppable super weapons (pretty please). Well, as I say, you can’t blame them for trying, can you?

 

Well, yes. Actually you can. Not only is the letter a futile attempt to stop the psychopaths in charge from developing a better killing implement, it is a deliberate whitewashing of the problem.

 

According to FLI, the idea isn’t scary in and of itself; it isn’t scary because of the documented history of the warmongering politicians in the US and the other NATO countries; it isn’t scary because governments murdering their own citizens was the leading cause of unnatural death in the 20th century. No, it’s scary because “It will only be a matter of time until [autonomous weapons] appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.” If you thought the hysteria over Iran’s nuclear non-weapons program was off the charts, you ain’t seen nothing yet. Just wait till the neo-neocons get to claim that Assad or Putin or the enemy of the week is developing autonomous weapons!

 

In fact, the FLI doesn’t want to stop the deployment of AI on the battlefield at all. Quite the contrary. “There are many ways in which AI can make battlefields safer for humans,” the letter says, before adding that “AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so.” Indeed, they’ve helpfully drafted a list of research priorities for study into the field of AI, on the assumption that AI will be woven into the fabric of our society in the near future, from driverless cars and robots in the workforce to, yes, autonomous weapons.

 

So who is FLI, and who signed this open letter? Oh, just Stephen Hawking, Elon Musk, Nick Bostrom and a host of Silicon Valley royalty and academic bigwigs. Naturally, the letter is being plastered all over the media this week in what seems suspiciously like an advertising campaign for the machine takeover: Bill Gates, Stephen Hawking and Elon Musk have all broached the subject in the past year, and the Channel Four drama Humans and a whole host of other cultural programming have come along to subtly indoctrinate us that this robot-dominated future is an inevitability. That includes extensive coverage of the topic in the MSM, with copious reports in outlets like The Guardian telling us how AI is going to merge with the “Internet of Things.” But don’t worry; it’s mostly harmless.

 

…or so they want us to believe. Of course, what they don’t want to talk about in any great detail is the nightmare vision of the technocratic agenda that these technologies (or their forerunners) are enabling, and the transhumanist nightmare that this is ultimately leading us toward. That conversation is reserved for proponents of the Singularity like Ray Kurzweil, and any attempt to point out the obvious problems with this idea is pooh-poohed as “conspiracy theory.”

 

And so we have suspect organizations like the “Future of Life Institute” trying to steer the conversation on AI toward how we can safely integrate these potentially killer robots into our future society, even as the Hollywood programmers go overboard in steeping us in the idea. Meanwhile, those of us in the reality-based community get to watch this grand uncontrolled experiment with the future of our world unfold, much like the genetic engineering experiment and the geoengineering experiment before it.

 

What can be done about this AI/transhumanist/technocratic agenda? Can it be derailed? Contained? Stopped altogether? How? Corbett Report members are invited to log in and leave their thoughts in the comment section below.

by James Corbett

 

Source : The Corbett Report.

Crime Fighting Drones to Aid UK Police

In an innovative attempt to combat crime in the United Kingdom, police are seeking to employ Unmanned Aerial Vehicles (UAVS), or drones, to help them on the beat. It’s a move that, perhaps predictably, makes some Brits uneasy.

The Sussex and Surrey police forces have been given almost £250,000 by the Home Office to purchase five drones and evaluate how they perform – specifically, the forces want to know whether drones can be used to collect aerial evidence (which may be handy in cases of missing persons) and to investigate areas that could be dangerous to officers’ lives, the UK Times reported.

This is not the first time Sussex and Surrey have experimented with drones. Trials near Gatwick Airport in West Sussex found drones to be, in some instances, faster, safer and cheaper than humans for perimeter patrols. The forces now want to look into broader uses for the technology, to advise other divisions considering drone use, and to draw up a training scheme that could allow other officers to properly pilot drones.

The move comes as regulations for recreational drone use in the UK are tightening. A man in Liverpool was arrested earlier in March for filming Premier League football matches with a drone, and police are investigating an attempt to smuggle contraband into Bedford Prison using a UAV. With police forces across the country under pressure to cut costs, UAVs seem a likely route. But, as Engadget reports, until the technology has progressed enough for drones to make arrests, they are unlikely to replace human officers any time soon.

 

Source : Sputnik International.

Already Underway: Smart A.I. Running Our Police and Cities

Increasingly our streets and cities are using Artificial Intelligence (AI) to point police to crime hotspots through CCTV networks.

However, CCTV (closed circuit television) is not quite what is operating on our streets today. What we have now is IPTV, an internet protocol television network that can relay images to analytical software, which uses algorithms to identify ‘pre-crime’ areas in real time.

Currently this AI looks at areas that may be targeted for crimes such as burglaries or joyriding, with the predicted hotspot information sent directly to law enforcement smartphones in the field. This analytical software is being used in Glasgow, hailed as Britain’s first ‘smart city’, where the Israeli security firm NICE Systems runs the CCTV/IPTV network, analysing data from the 442 fixed HD surveillance cameras and 30 mobile units under a project called Community Safety Glasgow, whose primary objectives are described as ‘delivering Glasgow a more efficient traffic management system, identifying crime in the city and tracking individuals.’
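To give a sense of what this kind of ‘hotspot’ analytics amounts to, here is a minimal sketch in Python of the general idea: score grid cells of a city by recent incident counts, weighting newer incidents more heavily. The coordinates are made up and this is only an illustration of the technique, not NICE Systems’ actual software.

```python
# Minimal sketch of grid-based crime "hotspot" scoring (illustrative only).
# Assumes a list of historical incidents as (lat, lon, days_ago) tuples.
from collections import defaultdict
from math import exp, log

def hotspot_scores(incidents, cell_size=0.005, half_life_days=30.0):
    """Score map grid cells so that recent incidents count more than old ones."""
    scores = defaultdict(float)
    for lat, lon, days_ago in incidents:
        cell = (round(lat / cell_size), round(lon / cell_size))
        scores[cell] += exp(-days_ago * log(2) / half_life_days)  # exponential decay
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical incidents placed roughly around Glasgow city centre.
incidents = [(55.8642, -4.2518, 2), (55.8640, -4.2520, 10), (55.8700, -4.2900, 45)]
for cell, score in hotspot_scores(incidents)[:3]:
    print(cell, round(score, 3))
```

The highest-scoring cells would be the ‘hotspots’ pushed to officers’ phones; everything beyond that is simply a question of how much data, and how many kinds of data, get fed in.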

SKELETON CREW: With AI, there will be no need for large, fully staffed police surveillance units.

Whilst Glasgow City Council claims it is not currently utilising NICE Systems’ facial recognition capabilities, the new HD CCTV system being installed for the Future Cities Demonstrator initiative, funded by the Department for Business, Innovation and Skills via its quango the Technology Strategy Board, is still capable of tracking individuals within the city. A spokesperson from Glasgow City Council stated:

“A trial of NICE’s video analytics is planned for later in the year [2015]. This involves ‘Suspect Search’ which can be used to find missing children or vulnerable adults quickly, such as those with dementia, as well as tackling crime. Again it does not involve facial recognition or emotional intelligence.”

As well as finding missing children and vulnerable adults, presumably Suspect Search can also track suspects – the clue is in the name. No facial recognition. No surreptitiously taking and covertly using our biometrics – that’s okay then? So how does this tracking work? The software still produces the same outcome as using facial biometrics: individuals can be identified, traced and tracked. According to NICE:

“Working with information about the entire body, from head to foot (clothes, accessories, skin, hair) enables faster and more accurate matches.”

Of course, because CCTV cameras are not at head height and persons of interest do not always have their face aimed at the camera – it could be the back or top of the head, or the person could be wearing a cap – analysing the whole body makes sense. But the outcome is still the same as using our biometrics: agencies are able to track us individually, and covertly.
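To make the whole-body approach concrete, here is a toy sketch, assuming person crops have already been extracted from the video. A simple colour histogram stands in for the learned ‘re-identification’ features real systems use; the point is only to show that no face is needed to follow someone from camera to camera.

```python
# Toy sketch of whole-body appearance matching between camera views.
# Assumes person crops (H x W x 3 arrays) have already been extracted from frames.
import numpy as np

def descriptor(crop, bins=8):
    """Normalised per-channel colour histogram of a person crop."""
    hists = [np.histogram(crop[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    v = np.concatenate(hists).astype(float)
    return v / (np.linalg.norm(v) + 1e-9)

def best_match(query_crop, gallery_crops):
    """Index and cosine similarity of the gallery crop most like the query."""
    q = descriptor(query_crop)
    sims = [float(q @ descriptor(g)) for g in gallery_crops]
    idx = int(np.argmax(sims))
    return idx, sims[idx]

rng = np.random.default_rng(0)
query = rng.integers(0, 120, size=(128, 48, 3))        # dark clothing
gallery = [rng.integers(120, 256, size=(128, 48, 3)),   # light clothing
           rng.integers(0, 120, size=(128, 48, 3))]     # dark clothing
print(best_match(query, gallery))                       # expect index 1, the dark crop
```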

SEE ALSO: NeoFace: How Anonymous Are We To Big Brother Police Agencies?

Moving surveillance cameras to a height at which facial recognition software can operate seems to be where police agencies are heading. On March 9th, Metropolitan Police Commissioner Sir Bernard Hogan-Howe called for surveillance cameras to be moved to head height:

“We’ve got a strategy to encourage people to do with their cameras, is to move them down to eye level… facial recognition software has got better it means we can apply the software to the images of burglaries or robberies whatever, so we can compare those images with the images we take when we arrest people.”

Cameras were previously situated out of reach to stop the public from vandalising them, but that no longer appears to be a concern. Clearly, police now believe that the public are sufficiently desensitised to CCTV and will not physically interfere with the system. Having cameras at head height allows facial recognition software to run behind whatever surveillance system is operating – which is precisely what West Midlands Police intend to do behind Birmingham’s HD CCTV network.

West Midlands Police have a Public Private Partnership (PPP) with the multinational corporation Accenture to “revolutionise and streamline the way the force handles data, uses mobile and digital technology and interacts with social media and other organisations such as local authorities.” This includes running a facial recognition system called ‘Face in the Crowd’ behind what the Association of Chief Police Officers (ACPO) calls “the wealth of CCTV footage available” in Birmingham. Apparently ‘Face in the Crowd’ is sold to us as a tool purely for finding missing persons, much as ‘Suspect Search’ in Glasgow is primarily for missing children and vulnerable adults. All for our safety, of course.
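As a rough illustration of what a ‘Face in the Crowd’-style gallery search involves – a generic sketch, not Accenture’s product – assume some face-recognition model has already turned each face into a unit-length embedding vector. The search itself is then just a nearest-neighbour comparison against the stored gallery, with a confidence threshold deciding whether to report a ‘hit’:

```python
# Generic sketch of a gallery search against stored face images (no vendor's product).
# Assumes face embeddings are already computed elsewhere as unit-length vectors.
import numpy as np

def search_gallery(probe_embedding, gallery, threshold=0.6):
    """gallery: dict mapping person_id -> unit-length embedding vector."""
    best_id, best_sim = None, -1.0
    for person_id, embedding in gallery.items():
        sim = float(np.dot(probe_embedding, embedding))  # cosine similarity
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    # Only report a match if it clears the confidence threshold.
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)

# Made-up three-dimensional "embeddings" just to show the call shape.
gallery = {"person_A": np.array([1.0, 0.0, 0.0]), "person_B": np.array([0.0, 1.0, 0.0])}
probe = np.array([0.9, 0.1, 0.0]); probe = probe / np.linalg.norm(probe)
print(search_gallery(probe, gallery))  # matches "person_A" with high similarity
```

Everything hinges on where that threshold is set: too low and innocent people get flagged, too high and the system misses its targets.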

Accenture also delivers facial recognition for police body-worn cameras. Heading up the Accenture Police Services department is Managing Director Tim Godwin, former Deputy Commissioner of London’s Metropolitan Police Service. His attitude towards facial recognition used with police body cameras is that:

“They [body cameras] are a good thing in my view, it gives you a lot of additional evidence, you have got facial recognition, you can actually link it directly to a case system – so it’s really good.”

How long before West Midlands Police start utilising a wider variety of Accenture’s products such as facial recognition body worn cameras? Would they tell the public if they did anyway?

Maybe they would follow ACPO’s lead: ACPO has been using a facial recognition system since April 2014 to search 18 million of our photographs on the Police National Database, a system it had been developing since October 2012, and it failed to inform anyone.

Quite where West Midlands Police have got to with their facial recognition technology is unclear; however, a Freedom of Information request to the police authority is due back by the end of March 2015, at which time we can analyse their official policy on the technology.

Although local government CCTV networks are not routinely hooked into private surveillance networks, the advent of IPTV – with surveillance data no longer recorded on video tapes for storage but saved in the ‘cloud’ – would presumably create a long-term desire for government agencies to gain access to those private surveillance networks.

Precrime is Here

With these technological and analytical ‘intelligences’ increasingly running behind existing, unchanged street furniture, essentially nothing outwardly changes for us.

These systems are becoming the norm – and why not, if it is “for the greater good”, as the police agencies say?

However, each day they are used they ‘learn’ more about how we behave: our mass movements as herds in cities, and our movements as individual humans. How long will it be before the systems begin predicting ‘pre-crime’ in each one of us individually?

Currently, precrime software solutions are running in selected police departments across Great Britain, as well as in other countries.

Where do the analytics stop?

Here are just two of many basic AI operating system concepts being used to promote the coming AI transition:

Will the machine scan parliament’s reams of legislation to analyse the particular crime that has been committed by an individual?

Then perhaps the machine could scan court case histories to generate algorithms for ‘best conviction outcomes’, or advise police agencies on precisely which crime has been committed and the optimum penalty.

Could the machine ultimately analyse whether we are guilty, or innocent?
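As a purely speculative toy, the kind of automation those questions imagine might start with something as crude as matching an incident description against offence definitions by keyword overlap. The offence names and keywords below are invented for illustration; real legal reasoning is vastly harder, and nothing here refers to any actual police system.

```python
# Speculative toy: match an incident description to offence definitions by keyword overlap.
# Offence names and keywords are made up; this is not how real legal analysis works.
OFFENCES = {
    "burglary": {"entered", "dwelling", "stole", "property"},
    "vehicle theft": {"vehicle", "taken", "without", "consent"},
}

def suggest_offence(description):
    words = set(description.lower().split())
    scores = {name: len(words & keywords) for name, keywords in OFFENCES.items()}
    return max(scores, key=scores.get), scores

print(suggest_offence("Suspect entered a dwelling and stole property from the bedroom"))
# -> ('burglary', {'burglary': 4, 'vehicle theft': 0})
```

Even this crude sketch shows where the danger lies: the machine returns an answer whether or not that answer is just.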

Beyond its obvious, astonishing computing ability and task applications, the real prospect of AI raises so many more questions regarding the devolution of many layers of human decision-making that until now have been taken completely for granted. As AI increases in power, so too will the ethical considerations grow. One would hope so, anyway.

These systems have the potential to create dizzying amounts of data sets about us. Being able to control our personal digital footprint is now a thing of the past as we move into an age of mass ubiquitous data harvesting.

***
21WIRE featured author Pippa King is a researcher and writer whose work focuses on digital privacy and the emerging issues regarding RFID, biometrics and surveillance. See more of Pippa’s work and research at State of Surveillance and Biometrics in Schools.

 

Source : 21st Century Wire

American Drone Operators Are Quitting in Record Numbers

 

The US drone war across much of the Greater Middle East and parts of Africa is in crisis, and not because civilians are dying or the target list for that war or the right to wage it just about anywhere on the planet are in question in Washington. Something far more basic is at stake: drone pilots are quitting in record numbers.

 

There are roughly 1,000 such drone pilots, known in the trade as “18Xs,” working for the US Air Force today. Another 180 pilots graduate annually from a training program that takes about a year to complete at Holloman and Randolph Air Force bases in, respectively, New Mexico and Texas. As it happens, in those same twelve months, about 240 trained pilots quit and the Air Force is at a loss to explain the phenomenon. (The better-known US Central Intelligence Agency drone assassination program is also flown by Air Force pilots loaned out for the covert missions.)

 

On January 4, 2015, the Daily Beast revealed an undated internal memo to Air Force Chief of Staff General Mark Welsh from General Herbert “Hawk” Carlisle stating that pilot “outflow increases will damage the readiness and combat capability of the MQ-1/9 [Predator and Reaper] enterprise for years to come” and adding that he was “extremely concerned.” Eleven days later, the issue got top billing at a special high-level briefing on the state of the Air Force. Secretary of the Air Force Deborah Lee James joined Welsh to address the matter. “This is a force that is under significant stress—significant stress from what is an unrelenting pace of operations,” she told the media.

 

In theory, drone pilots have a cushy life. Unlike soldiers on duty in “war zones,” they can continue to live with their families here in the United States. No muddy foxholes or sandstorm-swept desert barracks under threat of enemy attack for them. Instead, these new techno-warriors commute to work like any office employees and sit in front of computer screens wielding joysticks, playing what most people would consider a glorified video game.

 

Read more

 

Source : The Nation

Can Revolution Produce Freedom in the Technological Age of Surveillance and Control?

Control through electronic surveillance is totally pervasive now… But can technology produce a strong revolution of freedom, independence and self sufficiency as well? I’m hopeful, but not convinced.

After reading up on the history of cybernetics, the ARPA (DARPA) Internet and television, I’m about ready to go Amish – or at least low-tech Amish.

The Technological Age & The End of Freedom?


The topic of the Unabomber came up again. It concerns a passage favored by transhumanist Ray Kurzweil (who included it in his book The Age of Spiritual Machines) and by Bill Joy, co-founder of the now acquired and defunct Sun Microsystems (who wrote about it), in which Ted Kaczynski lays out the “New Luddite Challenge” – essentially the question of what happens if computers take over completely, and, if they don’t, what happens at the hands of an elite who no longer need the masses for labor, or anything else.

Will people simply be exterminated? Will the population be gradually but sharply reduced through population control, eugenics, family planning and propaganda (as is actually happening now), or will the masses instead be treated as “pets” with cute hobbies and trivial pursuits, but no real meaning in society? The question remains open – and the answer could be a combination of all of the above.

In the face of mass unemployment and depopulation, is violent revolution justified?

For reasons I explain in the video above, likely not.

It is not clear whose removal by force would actually stop, or even slow, the tyranny; the tyranny exists, but it is systemic and compartmentalized, held in the hands of thousands, probably millions, of people. There are countless corrupt and even evil officials, but stopping them will not stop the system. Moreover, violence has become a trivial event for media sensationalism and a tool for justifying greater police state powers. Thus, violence is the wrong approach on many levels, including the moral one.

Gandhi, by the way, made significant advances with non-violent non-cooperation – yet he was ironically inspired by the same works of Henry David Thoreau on civil disobedience and self-sufficient living that inspired the violent revolutionary Kaczynski.

Liberty Through Revolution, and Liberty Through New Revelations

Meanwhile, there is the question of liberty, and the kind of freedom that America’s Founding Fathers pursued circa 1776.

Though other methods were attempted – the Tea Party protest, for instance – the revolution was ultimately fought through violent guerrilla warfare. One of Thomas Jefferson’s most famous quotes – as author of the Declaration of Independence and third president of the United States of America – is:

“The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.”

Years later, in his letters to John Adams – the second president and a one-time political enemy of Jefferson’s – Jefferson posed the idea that freedom could not be so easily restored through violence, particularly if the public were unenlightened and uneducated in the ways of liberty and good self-government.

Jefferson discusses the case of ancient Rome, where the usurped powers of Julius Caesar transformed the republic into a thoroughly corrupt dictatorship. Caesar was killed in a Senate conspiracy led by Brutus. Ultimately, Caesar’s dynasty remained in control of the empire anyway.

But Jefferson argues that even if Brutus had prevailed, or other Roman icons of freedom such as Cicero or Cato had come to power, it would have been nearly impossible to create good government in that climate of corruption, in an era of debased, demoralized masses uneducated in the virtues of self-government:

“How can a people who have struggled long years under oppression throw off their oppressors and establish a free society? The problems are immense, but their solution lies in the education and enlightenment of the people and the emergence of a spirit that will serve as a foundation for independence and self-government.”

“If Caesar had been as virtuous as he was daring and sagacious, what could he, even in the plenitude of his usurped power, have done to lead his fellow citizens into good government?… If their people indeed had been, like ourselves, enlightened, peaceable, and really free, the answer would be obvious. ‘Restore independence to all your foreign conquests, relieve Italy from the government of the rabble of Rome, consult it as a nation entitled to self-government, and do its will’.”

“But steeped in corruption, vice and venality, as the whole nation was,… what could even Cicero, Cato, Brutus have done, had it been referred to them to establish a good government for their country?… No government can continue good but under the control of the people; and their people were so demoralized and depraved as to be incapable of exercising a wholesome control.”

“These are the inculcations necessary to render the people a sure basis for the structure of order and good government. But this would have been an operation of a generation or two at least, within which period would have succeeded many Neros and Commoduses, who would have quashed the whole process. I confess, then, I can neither see what Cicero, Cato and Brutus, united and uncontrolled could have devised to lead their people into good government, nor how this enigma can be solved.” –Thomas Jefferson to John Adams, Dec. 10, 1819.

Thinking Our Way Into a Future That Needs Us

The takeaway here is the need for education – not just training, or Common Core standards that produce automatons and robot-like worker bees, but real education based upon enlightening and empowering information. If the future needs anything, it is thinkers, not regurgitators, memorizers, replicators and drones – technology is undoubtedly already quite good at all that.

The Founders were great scholars of history and political theory and instituted limited government after careful consideration of all the things that went wrong with past systems, and what the best options were for encouraging freedom on several levels. They weren’t perfect, and in fact were quite flawed as individuals, but they did make a principled attempt.

Today, in the age of technology, computers and the Internet, freedom is losing to the control freaks, engaged in mass surveillance, mind control, economic centralization and oligarchical collectivism. Is there room for freedom in this technological society? Could a peaceful revolution succeed?

That depends upon what we can learn from technology’s inspiring possibilities – but also what we can learn from the many lessons of the past. Hint: most of these lessons are being wholesale ignored, as power for the state and corporate institutions concentrates and grows to levels well beyond dangerous, looming and eerie.

Or will we become just “pets,” as the musical act Porno for Pyros predicted?:

 

Source : Truthstream Media.

France Investigating Mysterious Drones Over Its Nuclear Plants

Unmanned aircraft have been documented flying over seven separate facilities operated by the state-owned nuclear power company EDF between Oct. 5 and Oct. 20. (Photo: emmett anderson/flickr/cc)

 

Authorities in France on Thursday announced the launch of an investigation into a series of mysterious drones that have been sighted flying over a number of the nation’s nuclear power facilities.

Unmanned aircraft have been documented flying over seven separate facilities operated by the state-owned nuclear power company EDF between Oct. 5 and Oct. 20, according to company officials.

Read more
