The Pentagon is building a ‘self-aware’ killer robot army fueled by social media
Official US defence and NATO documents confirm that autonomous weapon systems will kill targets, including civilians, based on tweets, blogs and Instagram
by Nafeez Ahmed
This exclusive is published by INSURGE INTELLIGENCE, a crowd-funded investigative journalism project for the global commons
An unclassified 2016 Department of Defense (DoD) document, the Human Systems Roadmap Review, reveals that the US military plans to create artificially intelligent (AI) autonomous weapon systems, which will use predictive social media analytics to make decisions on lethal force with minimal human involvement.
Despite official insistence that humans will retain a “meaningful” degree of control over autonomous weapon systems, this and other Pentagon documents dated from 2015 to 2016 confirm that US military planners are already developing technologies designed to enable swarms of “self-aware” interconnected robots to design and execute kill operations against robot-selected targets.
More alarmingly, the documents show that the DoD believes that within just fifteen years, it will be feasible for mission planning, target selection and the deployment of lethal force to be delegated entirely to autonomous weapon systems in air, land and sea. The Pentagon expects AI threat assessments for these autonomous operations to be derived from massive data sets including blogs, websites, and multimedia posts on social media platforms like Twitter, Facebook and Instagram.
The raft of Pentagon documentation flatly contradicts Deputy Defense Secretary Robert Work’s denial that the DoD is planning to develop killer robots.
In a widely reported March conversation with Washington Post columnist David Ignatius, Work acknowledged that this stance may change as rival powers work to create such technologies:
“We might be going up against a competitor that is more willing to delegate authority to machines than we are, and as that competition unfolds we will have to make decisions on how we best can compete.”
But, he insisted, “We will not delegate lethal authority to a machine to make a decision,” except for “cyber or electronic warfare.”
Official US defence and NATO documents dissected by INSURGE INTELLIGENCE reveal that Western governments are already planning to develop autonomous weapon systems with the capacity to make decisions on lethal force — and that such systems, in the future, are even expected to make decisions on acceptable levels of “collateral damage.”
Behind public talks, a secret arms race
Efforts to create autonomous robot killers have evolved over the last decade, but have come to a head this year.
A National Defense Industrial Association (NDIA) conference on Ground Robotics Capabilities in March hosted government officials and industry leaders who confirmed that the Pentagon is developing robot teams that would be able to use lethal force without direction from human operators.
In April, government representatives and international NGOs convened at the United Nations in Geneva to discuss the legal and ethical issues surrounding lethal autonomous weapon systems (LAWS).
That month, the UK government launched a parliamentary inquiry into robotics and AI. And earlier in May, the White House Office of Science and Technology Policy announced a series of public workshops on the wide-ranging social and economic implications of AI.
Most media outlets have reported that, so far, governments have not ruled out the long-term possibility that intelligent robots could eventually be authorized to make decisions to kill human targets autonomously.
But contrary to Robert Work’s claim, active research and development efforts to explore this possibility are already underway. The plans can be gleaned from several unclassified Pentagon documents in the public record that have gone unnoticed, until now.
Among them is a document released in February 2016 from the Pentagon’s Human Systems Community of Interest (HSCOI).
The document shows not only that the Pentagon is actively creating lethal autonomous weapon systems, but that a crucial component of the decision-making process for such robotic systems will include complex Big Data models, one of whose inputs will be public social media posts.
Robots that kill ‘like people’
The HSCOI is a little-known multi-agency research and development network seeded by the Office of the Secretary of Defense (OSD), which acts as a central hub for a wide range of science and technology work across US military and intelligence agencies.
The document is a 53-page presentation prepared by HSCOI chair, Dr. John Tangney, who is Director of the Office of Naval Research’s Human and Bioengineered Systems Division. Titled Human Systems Roadmap Review, the slides were presented at the NDIA’s Human Systems Conference in February.
The document says that one of the five “building blocks” of the Human Systems program is to “Network-enable, autonomous weapons hardened to operate in a future Cyber/EW [electronic warfare] Environment.” This would allow for “cooperative weapon concepts in communications-denied environments.”
But then the document goes further, identifying one of its “focus areas” for science and technology development as “Autonomous Weapons: Systems that can take action, when needed”, along with “Architectures for Autonomous Agents and Synthetic Teammates.”
The final objective is the establishment of “autonomous control of multiple unmanned systems for military operations.”
Such autonomous systems must be capable of selecting and engaging targets by themselves — with human “control” drastically minimized to affirming that the operation remains within the parameters of the Commander’s “intent.”
The document explicitly asserts that these new autonomous weapon systems should be able to respond to threats without human involvement, but in a way that simulates human behavior and cognition.
The DoD’s HSCOI program must “bridge the gap between high fidelity simulations of human cognition in laboratory tasks and complex, dynamic environments.”
Referring to the “Mechanisms of Cognitive Processing” of autonomous systems, the document highlights the need for:
“More robust, valid, and integrated mechanisms that enable constructive agents that truly think and act like people.”
The Pentagon’s ultimate goal is to develop “Autonomous control of multiple weapon systems with fewer personnel” as a “force multiplier.”
The new systems must display “highly reliable autonomous cooperative behavior” to allow “agile and robust mission effectiveness across a wide range of situations, and with the many ambiguities associated with the ‘fog of war.’”
Resurrecting the human terrain
The HSCOI consists of senior officials from the US Army, Navy, Marine Corps, Air Force and the Defense Advanced Research Projects Agency (DARPA), and is overseen by the Assistant Secretary of Defense for Research & Engineering and the Assistant Secretary of Defense for Health Affairs.
HSCOI’s work goes well beyond simply creating autonomous weapons systems. An integral part of this is simultaneously advancing human-machine interfaces and predictive analytics.
The latter includes what an HSCOI brochure for the technology industry, ‘Challenges, Opportunities and Future Efforts’, describes as creating “models for socially-based threat prediction” as part of “human activity ISR.”
This is shorthand for intelligence, surveillance and reconnaissance of a population in an ‘area of interest’: collecting and analyzing data on the behaviors, culture, social structure, networks, relationships, motivation, intent, vulnerabilities, and capabilities of a human group.
The idea, according to the brochure, is to bring together open source data from a wide spectrum, including social media sources, in a single analytical interface that can “display knowledge of beliefs, attitudes and norms that motivate in uncertain environments; use that knowledge to construct courses of action to achieve Commander’s intent and minimize unintended consequences; [and] construct models to allow accurate forecasts of predicted events.”
The Human Systems Roadmap Review document from February 2016 shows that this area of development is a legacy of the Pentagon’s controversial “human terrain” program.
The Human Terrain System (HTS) was a US Army Training and Doctrine Command (TRADOC) program established in 2006, which embedded social scientists in the field to augment counterinsurgency operations in theaters like Iraq and Afghanistan.
The idea was to use social scientists and cultural anthropologists to provide the US military actionable insight into local populations to facilitate operations — in other words, to weaponize social science.
The $725 million program was shut down in September 2014 in the wake of growing controversy over its sheer incompetence.
The HSCOI program that replaces it still includes the social sciences, but the emphasis is now on combining them with predictive computational models based on Big Data. The brochure puts the projected budget for the new human systems project at $450 million.
The Pentagon’s Human Systems Roadmap Review demonstrates that far from being eliminated, the HTS paradigm has been upgraded as part of a wider multi-agency program that involves integrating Big Data analytics with human-machine interfaces, and ultimately autonomous weapon systems.
The new science of social media crystal ball gazing
The 2016 human systems roadmap explains that the Pentagon’s “vision” is to use “effective engagement with the dynamic human terrain to make better courses of action and predict human responses to our actions” based on “predictive analytics for multi-source data.”
In a slide entitled, ‘Exploiting Social Data, Dominating Human Terrain, Effective Engagement,’ the document provides further detail on the Pentagon’s goals:
“Effectively evaluate/engage social influence groups in the op-environment to understand and exploit support, threats, and vulnerabilities throughout the conflict space. Master the new information environment with capability to exploit new data sources rapidly.”
The Pentagon wants to draw on massive repositories of open source data that can support “predictive, autonomous analytics to forecast and mitigate human threats and events.”
This means not just developing “behavioral models that reveal sociocultural uncertainty and mission risk”, but creating “forecast models for novel threats and critical events with 48–72 hour timeframes”, and even establishing technology that will use such data to “provide real-time situation awareness.”
According to the document, “full spectrum social media analysis” is to play a huge role in this modeling, to support “I/W [irregular warfare], information operations, and strategic communications.”
This is broken down further into three core areas:
“Media predictive analytics; Content-based text and video retrieval; Social media exploitation for intel.”
The document refers to the use of social media data to forecast future threats and, on this basis, automatically develop recommendations for a “course of action” (CoA).
Under the title ‘Weak Signal Analysis & Social Network Analysis for Threat Forecasting’, the Pentagon highlights the need to:
“Develop real-time understanding of uncertain context with low-cost tools that are easy to train, reduce analyst workload, and inform COA [course of action] selection/analysis.”
In other words, the human input into the development of course of action “selection/analysis” must be increasingly reduced, and replaced with automated predictive analytical models that draw extensively on social media data.
This can even be used to inform soldiers of real-time threats using augmented reality during operations. The document refers to “Social Media Fusion to alert tactical edge Soldiers” and “Person of Interest recognition and associated relations.”
The idea is to identify potential targets — ‘persons of interest’ — and their networks, in real-time, using social media data as ‘intelligence.’
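The kind of network analysis described here can be sketched in miniature. The following is an illustrative toy example, not code from any Pentagon document: it builds a “who mentions whom” graph from invented social media data and ranks accounts by how often others mention them — one simple proxy by which such tools could surface central “persons of interest.”

```python
from collections import Counter

# Hypothetical mention data: (author, mentioned_account) pairs.
# All account names and edges are invented for illustration.
mentions = [
    ("alice", "bob"), ("carol", "bob"), ("dave", "bob"),
    ("bob", "eve"), ("carol", "eve"), ("alice", "carol"),
]

# In-degree centrality: how often each account is mentioned by others.
in_degree = Counter(target for _source, target in mentions)

# Accounts ordered from most- to least-mentioned; the top entries are
# what an analyst dashboard would flag as central nodes in the network.
ranked = [account for account, _count in in_degree.most_common()]
print(ranked[0])  # -> bob
```

Real systems would of course operate on millions of posts and use richer centrality measures, but the underlying logic — turning social data into a graph and ranking its nodes — is the same.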
Meaningful human control without humans
Both the US and British governments are therefore rapidly attempting to redefine “human control” and “human intent” in the context of autonomous systems.
Among the problems that emerged at the UN meetings in April is the tendency to dilute the criteria by which an autonomous weapon system can be described as under “meaningful” human control.
A separate Pentagon document dated March 2016 — a set of presentation slides for that month’s IEEE Conference on Cognitive Methods in Situation Awareness & Decision Support — insists that DoD policy is to ensure that autonomous systems ultimately operate under human supervision:
“[The] main benefits of autonomous capabilities are to extend and complement human performance, not necessarily provide a direct replacement of humans.”
Unfortunately, there is a ‘but’.
The March document, Autonomous Horizons: System Autonomy in the Air Force, was authored by Dr. Greg Zacharias, Chief Scientist of the US Air Force. The IEEE conference where it was presented was sponsored by two leading government defense contractors, Lockheed Martin and United Technologies Corporation, among other patrons.
Further passages of the document are revealing:
“Autonomous decisions can lead to high-regret actions, especially in uncertain environments.”
In particular, the document observes:
“Some DoD activity, such as force application, will occur in complex, unpredictable, and contested environments. Risk is high.”
The solution, supposedly, is to design machines that basically think, learn and problem solve like humans. An autonomous AI system should “be congruent with the way humans parse the problem” and driven by “aiding/automation knowledge management processes along lines of the way humans solve problem [sic].”
A section titled ‘AFRL [Air Force Research Laboratory] Roadmap for Autonomy’ thus demonstrates how by 2020, the US Air Force envisages “Machine-Assisted Ops compressing the kill chain.” The bottom of the slide reads:
“Decisions at the Speed of Computing.”
This two-stage “kill chain” is broken down as follows: first, “Defensive system mgr [manager] IDs threats & recommends actions”; second, “Intelligence analytic system fuses INT [intelligence] data & cues analyst of threats.”
In this structure, a lethal autonomous weapon system draws on intelligence data to identify a threat, which an analyst simply confirms before an “action” is recommended.
The analyst’s role here is simply to authorize the kill; the essential element of human control (assessing the integrity of the kill decision) has been relegated to the end of an entirely automated analytical process, as a mere perfunctory obligation.
By 2030, the document sees human involvement in this process as being reduced even further to an absolute minimum. While a human operator may be kept “in the loop” (in the document’s words) the Pentagon looks forward to a fully autonomous system consisting of:
“Optimized platform operations delivering integrated ISR [intelligence, surveillance and reconnaissance] and weapon effects.”
The goal, in other words, is a single integrated lethal autonomous weapon system combining full spectrum analysis of all data sources with “weapon effects” — that is, target selection and execution.
The document takes pains to layer this vision with a sense of ever-present human oversight.
AI “system self-awareness”
Yet an even more blunt assertion of the Pentagon’s objective is laid out in a third document, a set of slides titled DoD Autonomy Roadmap presented exactly a year earlier at the NDIA’s Defense Tech Expo.
The document, authored by Dr. Jon Bornstein, who leads the DoD’s Autonomy Community of Interest (ACOI), begins by framing its contents with the caveat: “Neither Warfighter nor machine is truly autonomous.”
Yet it goes on to call for machine agents to develop:
“Perception, reasoning, and intelligence allow[ing] for entities to have existence, intent, relationships, and understanding in the battle space relative to a mission.”
This will be the foundation for two types of weapon systems: “Human/ Autonomous System Interaction and Collaboration (HASIC)” and “Scalable Teaming of Autonomous Systems (STAS).”
In the near term, machine agents will be able “to evolve behaviors over time based on a complex and ever-changing knowledge base of the battle space… in the context of mission, background knowledge, intent, and sensor information.”
However, it is the Pentagon’s “far term” vision for machine agents as “self-aware” systems that is particularly disturbing:
• Ontologies adjusted through common-sense knowledge via intuition.
• Learning approaches based on self-exploration and social interactions.
• Behavioral stability through self-modification.
It is in this context of the “self-awareness” of an autonomous weapon system that the document clarifies the need for the system to autonomously develop forward decisions for action, namely:
“Autonomous systems that appropriately use internal model-based/deliberative planning approaches and sensing/perception driven actions/control.”
The Pentagon specifically hopes to create what it calls “trusted autonomous systems”, that is, machine agents whose behavior and reasoning can be fully understood, and therefore “trusted” by humans:
“Collaboration means there must be an understanding of and confidence in behaviors and decision making across a range of conditions. Agent transparency enables the human to understand what the agent is doing and why.”
Once again, this is to facilitate a process by which humans are increasingly removed from the nitty-gritty of operations.
In the “Mid Term”, there will be “Improved methods for sharing of authority” between humans and machines. In the “Far Term”, this will have evolved to a machine system functioning autonomously on the basis of “Awareness of ‘commanders intent’” and the “use of indirect feedback mechanisms.”
This will finally create the capacity to deploy “Scalable Teaming of Autonomous Systems (STAS)”, free of overt human direction, in which multiple machine agents display “shared perception, intent and execution.”
Teams of autonomous weapon systems will display “Robust self-organization, adaptation, and collaboration”; “Dynamic adaption, ability to self-organize and dynamically restructure”; and “Agent-to-agent collaboration.”
Notice the lack of human collaboration.
The “far term” vision for such “self-aware” autonomous weapon systems is not, as Robert Work claimed, limited to cyber or electronic warfare.
These operations might even take place in tight urban environments: “in close proximity to other manned & unmanned systems including crowded military & civilian areas.”
The document admits, though, that the Pentagon’s major challenge is to mitigate the risks posed by unpredictable environments and emergent behavior.
Autonomous systems are “difficult to assure correct behavior in a countless number of environmental conditions” and are “difficult to sufficiently capture and understand all intended and unintended consequences.”
Terminator teams, led by humans
The Autonomy roadmap document clearly confirms that the Pentagon’s final objective is to delegate the bulk of military operations to autonomous machines, capable of inflicting “Collective Defeat of Hard and Deeply Buried Targets.”
One type of machine agent is the “Autonomous Squad Member (Army)”, which “Integrates machine semantic understanding, reasoning, and perception into a ground robotic system”, and displays:
“Early implementation of a goal reasoning model, Goal-Directed Autonomy (GDA) to provide the robot the ability to self-select new goals when it encounters an unanticipated situation.”
Human team members in the squad must be able “to understand an intelligent agent’s intent, performance, future plans and reasoning processes.”
Another type is described under the header, ‘Autonomy for Air Combat Missions Team (AF).’
Such an autonomous air team, the document envisages, “Develops goal-directed reasoning, machine learning and operator interaction techniques to enable management of multiple, team UAVs.” This will achieve:
“Autonomous decision and team learning enable the TBM [Tactical Battle Manager] to maximize team effectiveness and survivability.”
TBM refers to battle management autonomy software for unmanned aircraft.
The Pentagon still, of course, wants to ensure that there remains a human manual override, which the document describes as enabling a human supervisor “to ‘call a play’ or manually control the system.”
Targeting evil antiwar bloggers
Yet the biggest challenge, nowhere acknowledged in any of the documents, is ensuring that automated AI target selection actually selects real threats, rather than generating or pursuing false positives.
According to the Human Systems roadmap document, the Pentagon has already demonstrated extensive AI analytical capabilities in real-time social media analysis, through a NATO live exercise last year.
During the exercise, Trident Juncture — NATO’s largest exercise in a decade — US military personnel “curated over 2M [million] relevant tweets, including information attacks (trolling) and other conflicts in the information space, including 6 months of baseline analysis.” They also “curated and analyzed over 20K [i.e. 20,000] tweets and 700 Instagrams during the exercise.”
The Pentagon document thus emphasizes that the US Army and Navy can now already “provide real-time situation awareness and automated analytics of social media sources with low manning, at affordable cost”, so that military leaders can “rapidly see whole patterns of data flow and critical pieces of data” and therefore “discern actionable information readily.”
The primary contributor to the Trident Juncture social media analysis for NATO, which occurred over two weeks from late October to early November 2015, was a team led by information scientist Professor Nitin Agarwal of the University of Arkansas at Little Rock.
Agarwal’s project was funded by the US Office of Naval Research, Air Force Research Laboratory and Army Research Office, and conducted in collaboration with NATO’s Allied Joint Force Command and NATO Strategic Communications Center of Excellence.
Slides from a conference presentation about the research show that the NATO-backed project attempted to identify a hostile blog network during the exercise containing “anti-NATO and anti-US propaganda.”
Among the top seven blogs identified as key nodes for anti-NATO internet traffic were websites run by Andreas Speck, an antiwar activist; War Resisters International (WRI); and Egyptian democracy campaigner Maikel Nabil Sanad — along with some Spanish language anti-militarism sites.
Andreas Speck is a former staffer at WRI, which is an international network of pacifist NGOs with offices and members in the UK, Western Europe and the US. One of its funders is the Joseph Rowntree Charitable Trust.
The WRI is fundamentally committed to nonviolence, and campaigns against war and militarism in all forms.
Most of the blogs identified by Agarwal’s NATO project are affiliated to the WRI, including for instance nomilservice.com, WRI’s Egyptian affiliate founded by Maikel Nabil, which campaigns against compulsory military service in Egypt. Nabil was nominated for the Nobel Peace Prize and even supported by the White House for his conscientious objection to Egyptian military atrocities.
The NATO project urges:
“These 7 blogs need to be further monitored.”
The project was touted by Agarwal as a great success: it managed to extract 635 identity markers through metadata from the blog network, including 65 email addresses, 3 “persons”, and 67 phone numbers.
Agarwal’s conference slides list three Pentagon-funded tools that his team created for this sort of social media analysis: Blogtracker, Scraawl, and Focal Structures Analysis.
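The mechanics of pulling “identity markers” such as email addresses and phone numbers out of scraped blog pages can be illustrated with a short sketch. This is not the actual Blogtracker or Scraawl code, which is not public; the sample text and regular expressions below are invented for illustration.

```python
import re

# Hypothetical scraped page text; addresses and numbers are fictitious.
page_text = """
Contact the editor at editor@example.org or call +1-555-0142.
Press enquiries: press@example.org
"""

# Simple patterns for two common identity markers. Production tools
# would use far more robust patterns plus structured metadata parsing.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d-]{7,}\d")

emails = EMAIL_RE.findall(page_text)
phones = PHONE_RE.findall(page_text)
print(emails)  # -> ['editor@example.org', 'press@example.org']
print(phones)  # -> ['+1-555-0142']
```

Run across an entire blog network, this kind of extraction is how a project arrives at totals like “65 email addresses” and “67 phone numbers.”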
Flagging up an Egyptian democracy activist like Maikel Nabil as a hostile entity promoting anti-NATO and anti-US propaganda demonstrates that when such automated AI tools are applied to war theatres in complex environments (think Pakistan, Afghanistan and Yemen), the potential to identify individuals or groups critical of US policy as terrorism threats is all too real.
This case demonstrates how deeply flawed the Pentagon’s automation ambitions really are. Even with the final input of independent human expert analysts, entirely peaceful pro-democracy campaigners who oppose war are relegated by NATO to the status of potential national security threats requiring further surveillance.
Compressing the kill chain
It’s often assumed that DoD Directive 3000.09, ‘Autonomy in Weapon Systems’, issued in 2012, limits kill decisions to human operators under the following stipulation in clause 4:
“Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
After several paragraphs underscoring the necessity of target selection and execution being undertaken under the oversight of a human operator, the Directive goes on to open up the possibility of developing autonomous weapon systems without any human oversight, albeit with the specific approval of senior Pentagon officials:
“Autonomous weapon systems may be used to apply non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets… Autonomous or semi-autonomous weapon systems intended to be used in a manner that falls outside the policies in subparagraphs 4.c.(1) through 4.c.(3) must be approved by the Under Secretary of Defense for Policy (USD(P)); the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)); and the CJCS before formal development and again before fielding.”
Rather than prohibiting the development of lethal autonomous weapon systems, the directive simply consolidates all such developments under the explicit authorization of the Pentagon’s top technology chiefs.
Worse, the directive expires on 21 November 2022, around the time such technology is expected to become operational.
Indeed, later in 2012, Lieutenant Colonel Jeffrey S. Thurnher, a US Army lawyer at the US Naval War College’s International Law Department, published a position paper in the National Defense University publication Joint Force Quarterly.
He argued that there were no substantive legal or ethical obstacles to developing fully autonomous killer robots, as long as such systems are designed to maintain a semblance of human oversight through “appropriate control measures.”
In the conclusions to his paper, titled No One At The Controls: Legal Implications of Fully Autonomous Targeting, Thurnher wrote:
“LARs [lethal autonomous robots] have the unique potential to operate at a tempo faster than humans can possibly achieve and to lethally strike even when communications links have been severed. Autonomous targeting technology will likely proliferate to nations and groups around the world. To prevent being surpassed by rivals, the United States should fully commit itself to harnessing the potential of fully autonomous targeting. The feared legal concerns do not appear to be an impediment to the development or deployment of LARs. Thus, operational commanders should take the lead in making this emerging technology a true force multiplier for the joint force.”
The NATO document, which aims to provide expert legal advice to government policymakers, sets out a position in which the deployment of autonomous weapon systems for lethal combat — in particular the delegation of targeting and kill decisions to machine agents — is viewed as being perfectly legitimate in principle.
It is the responsibility of specific states, the document concludes, to ensure that autonomous systems operate in compliance with international law in practice — a caveat that also applies for the use of autonomous systems for law-enforcement and self-defence.
In the future, though, the NATO document points to the development of autonomous systems that can “reliably determine when foreseen but unintentional harm to civilians is ethically permissible.”
Acknowledging that currently only humans are able to make a “judgement about the ethical permissibility of foreseen but unintentional harm to civilians (collateral damage)”, the NATO policy document urges states developing autonomous weapon systems to ensure that eventually they “are able to integrate with collateral damage estimation methodologies” so as to delegate targeting and kill decisions accordingly.
The NATO position is particularly extraordinary given that international law — such as the Geneva Conventions — defines foreseen deaths of civilians caused by a military action as intentional, precisely because they were foreseen yet actioned anyway. The first Additional Protocol to the Geneva Conventions, for instance, lists among its grave breaches:
“… making the civilian population or individual civilians, not taking a direct part in hostilities, the object of attack; launching an attack in the knowledge that such attack will cause incidental loss of civilian life, injury to civilians or damage to civilian objects which would be clearly excessive in relation to the concrete and direct military advantage anticipated;… making civilian objects, that is, objects that are not military objectives, the object of attack.”
“… launching an indiscriminate attack resulting in loss of life or injury to civilians or damage to civilian objects; launching an attack against works or installations containing dangerous forces in the knowledge that such attack will cause excessive incidental loss of civilian life, injury to civilians or damage to civilian objects.”
In other words, NATO’s official policy guidance on autonomous weapon systems sanitizes the potential for automated war crimes. The document actually encourages states to eventually develop autonomous weapons capable of inflicting “foreseen but unintentional” harm to civilians in the name of securing a ‘legitimate’ military advantage.
Yet the NATO document does not stop there. It even goes so far as to argue that policymakers considering the development of autonomous weapon systems for lethal combat should reflect on the possibility that delegating target and kill decisions to machine agents would minimize civilian casualties.
A new report by Paul Scharre, who led the Pentagon working group that drafted DoD Directive 3000.09 and now heads up the future warfare program at the Center for a New American Security in Washington DC, does not mince words about the potentially “catastrophic” risks of relying on autonomous weapon systems.
“With an autonomous weapon,” he writes, “the damage potential before a human controller is able to intervene could be far greater…
“In the most extreme case, an autonomous weapon could continue engaging inappropriate targets until it exhausts its magazine, potentially over a wide area. If the failure mode is replicated in other autonomous weapons of the same type, a military could face the disturbing prospect of large numbers of autonomous weapons failing simultaneously, with potentially catastrophic consequences.”
Scharre points out that “autonomous weapons pose a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces,” due to any number of potential reasons, including “hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors.”
Noting that in the software industry, every 1,000 lines of code typically contain between 15 and 50 errors, Scharre points out that such marginal, routine errors could easily accumulate into unexpected results missed even by the most stringent testing and validation methods.
The more complex the system, the more difficult it will be to verify and track the system’s behavior under all possible conditions: “… the number of potential interactions within the system and with its environment is simply too large.”
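Scharre’s cited error rates imply startling totals at scale. A back-of-envelope calculation, assuming a hypothetical one-million-line codebase (a round number chosen for illustration, not a figure from any document):

```python
# Back-of-envelope arithmetic using the industry error rates Scharre
# cites: 15 to 50 errors per 1,000 lines of code.
lines_of_code = 1_000_000  # hypothetical codebase size

low_rate, high_rate = 15, 50  # errors per 1,000 lines

low_estimate = lines_of_code // 1_000 * low_rate
high_estimate = lines_of_code // 1_000 * high_rate
print(low_estimate, high_estimate)  # -> 15000 50000
```

Even the low bound implies thousands of latent defects in a weapon system of realistic complexity — each a candidate trigger for the “normal accidents” Scharre warns of.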
The documents discussed here show that the Pentagon is going to great pains to develop ways to mitigate these risks.
But as Scharre concludes, “these risks cannot be eliminated entirely. Complex tightly coupled systems are inherently vulnerable to ‘normal accidents.’ The risk of accidents can be reduced, but never can be entirely eliminated.”
As the trajectory toward AI autonomy and complexity accelerates, so does the risk that autonomous weapon systems will, eventually, wreak havoc.
Dr Nafeez Ahmed is an investigative journalist, bestselling author and international security scholar. A former Guardian writer, he writes the ‘System Shift’ column for VICE’s Motherboard, and is a weekly columnist for Middle East Eye.
He is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his Guardian work, and was twice selected in the Evening Standard’s top 1,000 most globally influential Londoners, in 2014 and 2015.
Nafeez has also written and reported for The Independent, Sydney Morning Herald, The Age, The Scotsman, Foreign Policy, The Atlantic, Quartz, Prospect, New Statesman, Le Monde diplomatique, New Internationalist, The Ecologist, Alternet, Counterpunch, Truthout, among others.
He is a Visiting Research Fellow at the Faculty of Science and Technology at Anglia Ruskin University, where he is researching the link between global systemic crises and civil unrest for Springer Energy Briefs.
This story is being released for free in the public interest, and was enabled by crowdfunding. I’d like to thank my amazing community of patrons for their support, which gave me the opportunity to work on this story. Please support independent, investigative journalism for the global commons via Patreon.com, where you can donate as much or as little as you like.
Many people may mistakenly believe that the future is something that others, such as big companies or governments, usher in, and that they themselves play either a minor active role in it, or one that is entirely passive. In reality, there are already groups of regular people, just like you or me, around the world literally building the future of their communities with their own two hands, in collaboration with friends, family, neighbors, and, through the power of the Internet, like-minded individuals around the world.
Above image: Instead of some planned community built by government or developers, we can add a layer of opensource technology over our existing communities, on our rooftops, in our offices, and at existing public spaces or markets. In addition to this added layer of physical technology, a little change in our mindset will go a long way in transforming our communities.
Because of the exponential progress of technology, the impact of small, organized projects is increasing as well. Think about 3D printing and how for many years it remained firmly in the realm of large businesses for use in prototyping. It was only when small groups of enthusiastic hobbyists around the world began working on cheaper and more accessible versions of these machines that they ended up on the desktops of regular people around the world, changing the way we look at manufacturing.
Similar advances in energy production, biotechnology, agriculture, IT, and manufacturing technology are likewise empowering people on a very distributed and local level.
What we see emerging is a collection of local “institutions” giving people direct access to the means to change their communities for the better, bypassing more abstract and less efficient means of effecting change like voting or protesting.
Political processes, however, will become more relevant and practical when people actually have resources and direct hands-on experience in the matters of running their communities. Demanding more of those that represent you will have more meaning when those demands are coupled with practical solutions and enumerated plans of action.
3D printing has come a long way since the first RepRap desktop printers and their derivatives, which include MakerBot's first designs. It has gone from an obscure obsession among hobbyists to a mainstream phenomenon that is transforming the way we look at manufacturing.
Let’s explore these “institutions” and see what is possible, what is already being done, and how you can get involved today in physically shaping your community’s future.
A makerspace is exactly what it sounds like: a space where you make things. However, it is often associated with computer controlled personal manufacturing technology like 3D printers, CNC mills, and laser and/or waterjet cutters. There is also a significant amount of electronic prototyping equipment on hand including opensource development boards like the Arduino, which allows virtually anyone to control physical objects in the real world.
A well-equipped makerspace in Singapore.
Makerspaces also generally include a small core team with skills ranging from design and engineering to software development. These teams are usually eager to bring in new people and introduce them to the tools, techniques, and technology they are so passionate about.
Makerspaces already exist around the world, and no matter where you live, it is very likely that you have one relatively nearby.
Makerspaces hold frequent workshops to share their knowledge and enthusiasm with others, often absolute beginners. There is a good chance your local makerspace has workshops available. Some are even free.
You can prototype virtually anything in a makerspace, making it the perfect place to go when you have a problem and want to develop a practical, tangible solution to solve it. Everything from an opensource solar charger to a new kind of 3D printer could be (and has been) made at a makerspace, making it the perfect nexus for our local community and the variety of other local institutions that may crop up there.
Rediscovered traditional practices combined with modern technology make local food production both practical and profitable. Community gardens are not uncommon, and there is growing interest around the world, particularly in urban areas, in utilizing sun-soaked rooftops to grow food to consume or to distribute to local restaurants and markets.
US-based Growing Power shows what communities can accomplish by working together. It has proven that community urban agriculture can be both practical and profitable, with the project becoming not just a local business, but a resource for the community as well.
The Comcrop project in Singapore provides a particularly impressive example, having been in operation for several years now, serving not only as a source of locally produced food for restaurants and grocery stores, but also as a community resource, teaching all who are interested how to raise crops in a dense urban environment like Singapore’s.
Singapore’s Comcrop project has proven that even in the densest of urban environments, agriculture can be carried out by communities for profit, fun, and education. Collaboration with local makerspaces could further enhance their operation’s efficiency.
Another impressive example of local agriculture is US-based Growing Power where greenhouses, vermiculture, and aquaponics are all combined to generate an immense amount of food feeding into a local distribution network the project has diligently developed over the years.
Local food production and distribution is steadily expanding around the world as the concept of farmers’ markets spreads and entire communities of both producers and consumers connect in a far more relevant, transparent, and beneficial manner than is possible under the existing mass-consumerist paradigm of big-ag and big-box stores.
Applying the resources found at a makerspace to local agriculture gives us the ability to increase organic agriculture’s efficiency through automation. That’s the idea behind ProgressTH’s own automated agriculture project, and others like it. There is no reason why local communities cannot have locally produced organic food, and use technology to bring efficiency on par with that claimed by large-scale operations.
Modern civilization does not function without electrical power, something we are reminded of every time the power goes out during a storm. Currently, most of the world’s power still comes from centralized national grids and large power plants.
Dropping prices and increasing capabilities are making solar power an attractive means of helping decentralize and localize power production.
However, the march forward of technology is finally making the means of producing power locally more accessible to more people around the world. An extreme example of a localized, distributed power grid can be found in the remote hills of Thailand’s Phetchaburi province, where the national power grid never quite made it. A local team created a tech-center of sorts where villagers were trained in the design and installation of solar power systems, bringing the village light and power for irrigation, house by house. The villagers have created a sort of collaborative network where everyone helps out when expanding the network’s capabilities.
The Pedang Project in Phetchaburi, Thailand has literally brought power to a tiny remote village isolated from the national power grid. Now it is taking its experience and sharing it with others around the country to replicate their success.
This network also trains people from all over the country to replicate their success elsewhere, even in areas where the national grid does reach, but where independence in power production is still sought.
This includes a school halfway across the country that is entirely solar powered and has incorporated alternative energy into the curriculum, giving students practical experience and skills to use once they graduate.
A school in Thailand’s northeast has also become a center for alternative energy and organic agriculture, all of which is combined with more traditional curriculum. Students grow their own food and help maintain the solar power system that powers the school during studying hours.
Imagine every community, rural or urban, developing their own alternative power solutions themselves, managing both the physical infrastructure and the knowledge required to maintain it. It doesn’t necessarily need to replace current power production, but it could augment it until technology makes it possible for complete, localized and distributed power production.
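The house-by-house sizing work the villagers were trained in can be sketched as simple arithmetic. All inputs below are illustrative assumptions, not figures from the Phetchaburi project:

```python
# Rough solar sizing sketch for one small household.
# Every input here is an assumed, illustrative figure.
daily_load_kwh = 5.0     # assumed household consumption per day
peak_sun_hours = 4.5     # assumed average for a tropical location
system_losses = 0.75     # assumed derating for wiring, inverter, dust
panel_watts = 300        # assumed rating of a single panel

# Array size needed to cover the daily load after losses
required_array_kw = daily_load_kwh / (peak_sun_hours * system_losses)

# Number of panels, rounded up (ceiling division trick)
panels_needed = int(-(-required_array_kw * 1000 // panel_watts))
print(f"Array size: {required_array_kw:.2f} kW, panels: {panels_needed}")
```

With these assumed numbers, a household needs roughly a 1.5 kW array, or five 300 W panels, which is the kind of estimate a community team can produce and verify for itself, house by house.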
This healthcare professional is working on a prototype in a makerspace placed literally within the hospital he works at.
MIT’s MakerNurse program is one example of this. Bangkok-based QSNICH (Queen Sirikit National Institute of Child Health) is another. Decentralizing and opening up the development of biomedical technology is key to lowering its prices. While subsidizing healthcare is necessary now to ensure that people who cannot afford treatment can still get it, in the future healthcare will be so cheap that such subsidies will have less impact on the quantity and quality of care.
Biomedical technology, the hardware you see in hospitals, is one thing; the actual pharmaceuticals and therapies administered to patients are another. DIYbio (do-it-yourself biology) is a growing community, much like the maker movement, that seeks to open up biotechnology to a wider audience by lowering the cost of equipment and opening up knowledge, making its work collaborative, transparent and, most importantly, opensource.
3D-printed prototypes developed for healthcare professionals at a Bangkok-based children’s hospital by ProgressTH’s in-house makerspace.
And, believe it or not, the DIYbio community is even approaching cutting-edge technology like gene therapy, which has already produced remissions in terminal leukemia patients and shown promise in clinical trials for everything from heart disease to blindness and deafness. For now, such work sits somewhere between a community lab and a small start-up company, as is the case with Bioviva or Andrew Hessel’s Pink Army Cooperative. In the future, the current collaborations between makerspaces and healthcare professionals could extend and evolve into ones between biotech researchers and local community labs.
Liz Parrish of Bioviva is blurring the lines between traditional R&D and accelerated and smaller-scale progress in developing therapies for patients.
Again, the makerspace allows for the prototyping and development of much of the opensource biotech equipment already being produced and making headlines around the world.
Microfactories are localized manufacturing facilities that specialize in small-run production. Say you create a brilliant prototype at your local makerspace, but need to make only 100–200 units at a time. Traditional factories, because of current economies of scale, usually will not help you, at least not at a reasonable price. Microfactories can fill the void between makerspace prototypes and mass production.
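Those economies of scale come down to fixed setup costs. A toy break-even calculation, with entirely made-up prices, shows why a tooled factory run loses to small-run production at low volumes:

```python
# Toy cost model: traditional tooled production vs. small-run
# 3D printing. All dollar figures are illustrative assumptions.
def total_cost(setup, per_unit, qty):
    """Total cost of a production run of `qty` units."""
    return setup + per_unit * qty

MOLD_SETUP, MOLD_UNIT = 20_000, 2    # assumed tooling + unit cost
PRINT_SETUP, PRINT_UNIT = 0, 12      # assumed small-run unit cost

for qty in (100, 200, 2_000, 5_000):
    m = total_cost(MOLD_SETUP, MOLD_UNIT, qty)
    p = total_cost(PRINT_SETUP, PRINT_UNIT, qty)
    winner = "small-run" if p < m else "tooled factory"
    print(f"{qty:>5} units: factory ${m:,}, small-run ${p:,} -> {winner}")

# Break-even volume: setup / (per-unit difference)
print("Break-even:", MOLD_SETUP / (PRINT_UNIT - MOLD_UNIT), "units")
```

Under these assumed prices the break-even point is 2,000 units: below it, the microfactory's negligible setup cost wins; above it, traditional tooling pays for itself.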
Microfactories already exist, but require large capital investments for the amount of machinery required to efficiently carry out small-run production. Advances in personal manufacturing will continue to lower these barriers, and many makerspaces around the world are already working to bridge the gap between prototyping and small-run production.
In the future, microfactories may evolve into an entire network of distributed manufacturing making mass production obsolete. This is, again, dependent on the progress of manufacturing technology. When computer-controlled manufacturing processes like CNC mills and 3D printers can handle more materials, faster, and more efficiently, small-run production will become more and more practical.
An Arduino-compatible board made in Thailand for the Thai market beats Chinese-made boards in both quality and even price. This is part of a trend toward the gradual reduction of manufacturing “hubs,” leading toward a more distributed and local means of manufacturing.
This is just the leading edge of a shifting paradigm toward fully distributed manufacturing. Again, makerspaces will play a crucial role, providing educational and training resources for the local community to learn how to design and develop ideas into prototypes and then pass them on to local microfactories for production and distribution.
Local Motors is pioneering the concept of distributed car manufacturing. Microfactories in the future may make everything from handheld devices to something as big as a car, on demand or in small runs that will challenge or entirely shift our current globalized manufacturing paradigm.
Just how far could this go? Looking at US-based Local Motors, which is attempting to create (with much success) a distributed auto-manufacturing network, it could probably end up encompassing nearly everything we use on a daily basis short of aerospace and architecture. And with 3D-printed buildings cropping up around the world, each community might have its own cooperatively owned system for that as well.
Maybe now you can see how communities possessing these key institutions could begin to tackle their problems head on, practically, with tangible solutions instead of waiting for others, far away, to address them for them. By doing so, people will become more directly involved in their own destiny, possessing both skills and experience in running and improving their communities, giving them better insight and discretion when engaging in political processes beyond their community.
And because of the talent that is attracted to and produced within makerspaces, the means of creating, for example, parallel mesh communication networks or water production and distribution systems, could exist as well. Virtually everything in one’s community could end up a product of local talent, entrepreneurial vision, and innovation.
But it is important to remind potential critics that this is not a process leading toward tens of thousands of isolated communities scattered across the planet. Like makerspaces today, each of which possesses its own tools and talent, these communities would all be connected, collaborating with others around the world, adapting great ideas when needed while sharing their own successes through an opensource culture.
The distributed nature of these economic, manufacturing, healthcare, agricultural, and infrastructure networks also means more resilience, especially because they are collaborative on a much larger scale. There is no single power plant or agricultural region to “wipe out” to plunge a huge dependent population into crisis. Disasters and crises can be absorbed and compensated for by unaffected neighboring communities. The loss of power in one community will not affect another if both are self-sufficient in power production, yet one community could still lend another temporary assistance.
“Standards,” if you will, would still exist, honed not through legality and policy, but through actual performance data, user feedback, and reputation. And because this process by its very nature is a flexible one, unforeseen opportunities and threats could be capitalized on or met as needed.
How Can You Get Involved Today?
Yes, you can get involved today! All you have to do is find your closest makerspace and drop by to check it out. You can also begin teaching yourself by taking advantage of the huge amount of fully free resources online, covering everything from the basics of 3D printing to opensource electronics, local organic agriculture, and DIYbio. Let your favorite search engine be your guide and find the resources best suited to your own style of learning. On YouTube alone, simply typing in any area of interest usually turns up dozens of tutorials and presentations.
A makerspace in Chiang Mai, Thailand. Just a few years ago, there were no makerspaces at all in Thailand, now there are clubs and spaces from north to south and a growing community connected through collaboration and enthusiasm about the power of hands-on innovations and solutions.
Get your friends involved; and if none are interested, it is easy to make new friends who are interested in this shifting paradigm, since “collaboration” is in fact at the very heart of it. If you are in Bangkok, feel free to contact us for workshops that ProgressTH and its many friends have on offer, some of which are even free.
The most important thing to remember is, no matter how small your progress is day to day, it will all add up in a year’s time to something that will surely surprise you. The only sure way to fail is by doing nothing — after all, zero times all the days in the year still only equals zero. You do not need to be a trained engineer or professional designer, biologist, or experienced farmer to begin building up your local community. Many of the most prominent names contributing to this current paradigm are college dropouts, or entirely self-taught. You will surely run into professionals, however, and you will learn a lot from them.
It is a truly exciting journey, and one that will have direct benefit to both yourself and your community. You can do it part-time in addition to your existing job. And many have ended up making a living full-time by contributing. We have, and will continue covering this unfolding movement, and we would love to cover your contributions… so start contributing!
According to documents published by WikiLeaks this week, the NSA spied on multiple world leaders on behalf of oil companies. The documents revealed that the NSA spied on the private meetings of world leaders such as UN chief Ban Ki-moon, German Chancellor Angela Merkel, and other European politicians.
The discussion between Ban Ki-moon and Merkel involved environmental pollution and the impact of fossil fuels on the environment, and according to the WikiLeaks release, the NSA was listening in to collect information for oil companies.
“Today we showed that UN Secretary General Ban Ki-moon’s private meetings over how to save the planet from climate change were bugged by a country intent on protecting its largest oil companies,” WikiLeaks founder Julian Assange said in a statement.
“We previously published Hillary Clinton’s orders that US diplomats were to steal the Secretary General’s DNA. The US government has signed agreements with the UN that it will not engage in such conduct against the UN — let alone its Secretary General. It will be interesting to see the UN’s reaction, because if the Secretary General can be targeted without consequence, then everyone from world leader to street sweeper is at risk,” he added.
RELEASE: TOP SECRET NSA recording of private meeting between UN’s Ban Ki-moon and Germany’s Angela Merkel https://t.co/RwOVWzozrQ @UN
WikiLeaks publishes highly classified documents showing that the NSA bugged meetings between UN Secretary General Ban Ki-moon and German Chancellor Angela Merkel, between Israeli prime minister Netanyahu and Italian prime minister Berlusconi, and between key EU and Japanese trade ministers discussing their secret trade red lines at WTO negotiations, as well as details of a private meeting between then French president Nicolas Sarkozy, Merkel and Berlusconi.
The documents also reveal the content of the meetings, from Ban Ki-moon’s strategising with Merkel over climate change, to Netanyahu’s begging Berlusconi to help him deal with Obama, to Sarkozy telling Berlusconi that the Italian banking system would soon “pop like a cork”.
These new revelations raise questions about the influence that elements within the oil industry have on secretive government agencies like the NSA. It was not revealed which company was responsible, nor whether any money changed hands; all of these details were obviously kept “off the record.”
However, this would not be the first time that the US government’s military industrial complex has taken action on behalf of US corporations. In fact, it has been happening on a daily basis, across the planet, for many decades as documented in the book Confessions of an Economic Hitman, where a former US consultant chronicled his experience toppling democratically elected leaders that were uncooperative with US corporate interests.
John Vibes is an author and researcher who organizes a number of large events including the Free Your Mind Conference. He also has a publishing company where he offers a censorship free platform for both fiction and non-fiction writers. You can contact him and stay connected to his work at his Facebook page. You can purchase his books, or get your own book published at his website www.JohnVibes.com.
Hacktivist collective Anonymous claims to have dumped online a huge database belonging to Turkey’s General Directorate of Security (EGM) in response to “various abuses” by the Turkish government in recent months.
The person who uploaded the database Monday said he received it from a hacker who had “persistent access to various parts of the Turkish government infrastructure for the past two years.”
The compressed file weighs in at some 2.8GB, with the uncompressed version at around 17.8GB.
The files were released “in light of various government abuses in the past few months” in Turkey, as the activist “decided to take action against corruption,” the activists added.
EGM is the civilian police force in Turkey, tasked with preventing crime, keeping the peace and protecting citizens and their property.
Turkey has recently seen a clampdown on media freedom, with several journalists facing treason charges after unmasking dubious practices by President Recep Tayyip Erdogan and his government.
The Turkish authorities are also carrying out a so-called anti-terrorist operation against Kurdish militias in the southeast of the country, killing at least 150 civilians and putting over 200,000 lives at risk due to a strict curfew, according to Amnesty International.
Erdogan’s government has been shelling the Kurds fighting Islamic State (IS or Daesh, formerly ISIS/ISIL) in Syria and earlier was slammed for sending troops into Iraqi territory.
A recent successful hacking of three ISIS supporters’ Twitter accounts has revealed that the source of these accounts is not located in Syria or Iraq but in the UK and Saudi Arabia.
According to a report in the UK Mirror, a group of four hackers known as VandaSec hacked the ISIS accounts and linked them back to the Department for Work and Pensions (DWP) in the UK. Indeed, according to the hackers, the accounts are being run from Internet addresses that can be traced back to the DWP.
The Mirror reports that VandaSec showed them the IP addresses used by three separate jihadists (digital ones, at least) to access their Twitter accounts. Addresses that at first appeared to be based in Saudi Arabia were soon revealed to link back to the DWP in the UK.
“VandaSec’s work has sparked wild rumours suggesting someone inside the DWP is running ISIS-supporting accounts, or they were created by intelligence services as a honeypot to trap wannabe jihadis,” Jasper Hamill writes.
The Mirror then claims to have traced the IP addresses shown to them by VandaSec and allegedly found that the addresses pointed to “a series of unpublicized transactions between Britain and Saudi Arabia.”
The Mirror reports it has learned that the British government sold a significant number of IP addresses to two Saudi Arabian firms and, after the sale in October, the IP addresses were being used to spread ISIS propaganda. These IP addresses were apparently not the only ones sold to Saudi Arabia in October, however, but little information is available as to how many others were sold and what they are being used for.
But while the sale of the IP addresses might shift the blame from the UK government on the Saudi Arabian government, the question still remains as to how and why the addresses can be traced back to the DWP.
Some argue that the addresses can be traced back to the UK because the address records had not yet been fully updated. The UK Cabinet Office has officially admitted to selling IP addresses to Saudi Telecom and the Mobile Telecommunications Company (both based in Saudi Arabia) earlier this year. Still, why the Twitter accounts can be traced back to Britain bears a bit more explanation, at least officially.
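Mechanically, "tracing back" an address means finding which registered allocation block it falls in; registry (WHOIS) records map blocks to organizations, and a stale record keeps pointing at the previous owner after a sale. A minimal sketch using Python's standard library, with an entirely hypothetical registry snapshot and address blocks (not the real DWP or Saudi allocations):

```python
import ipaddress

# Hypothetical registry snapshot mapping allocation blocks to their
# *recorded* owner. If a block is sold but the record isn't updated,
# lookups keep returning the old registrant.
REGISTRY = {
    ipaddress.ip_network("51.64.0.0/12"): "UK government department",  # hypothetical
    ipaddress.ip_network("188.48.0.0/13"): "Saudi telecom operator",   # hypothetical
}

def recorded_owner(addr: str) -> str:
    """Return the registrant recorded for the block containing addr."""
    ip = ipaddress.ip_address(addr)
    for block, owner in REGISTRY.items():
        if ip in block:
            return owner
    return "unknown"

print(recorded_owner("51.66.10.1"))   # falls inside the first block
print(recorded_owner("188.50.0.9"))   # falls inside the second block
```

The point of the sketch is that the lookup only ever reflects what the registry says, not who actually operates the address today, which is exactly the gap the "records not fully updated" explanation relies on.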
Regardless, the fact that these Twitter accounts are active in Saudi Arabia and are operating with scant regard for any possibility of being discovered and punished is telling enough. Also, the fact that Twitter, generally cooperative with national governments, has yet to eliminate them should serve as an example to many that both the West and certainly the GCC are entirely uninterested in truly eliminating ISIS as a terrorist organization. Instead, the goal is clearly to continue to use it as a proxy army against non-cooperative governments and non-compliant populations the world over.
So what we have here is ever-increasing overreach by a government that continually erodes our civil liberties and right to privacy in the name of security, while applying such minimal measures to its own practices that it sold its own IP addresses to ISIS.
While Americans and the Western world are relentlessly hammered with propaganda describing ISIS as the greatest threat to humanity, justifying policies and surveillance states, it must be asked how members of the most notorious terrorist organization in the world can freely tweet and propagandize using social media linked to the UK.
Update: CISA is now the law: OBAMA SIGNS SPENDING, TAX BILL THAT REPEALS OIL EXPORT BAN
* * *
Back in 2014, civil liberties and privacy advocates were up in arms when the government tried to quietly push through the Cybersecurity Information Sharing Act, or CISA, a law which would allow federal agencies – including the NSA – to share cybersecurity information, and really any information at all, with private corporations “notwithstanding any other provision of law.” The most vocal complaint involved CISA’s information-sharing channel, which was ostensibly created for responding quickly to hacks and breaches, but which provided a loophole in privacy laws enabling intelligence and law enforcement surveillance without a warrant.
Ironically, in its earlier version, CISA had drawn the opposition of tech firms including Apple, Twitter, Reddit, as well as the Business Software Alliance, the Computer and Communications Industry Association and many others including countless politicians and, most amusingly, the White House itself.
In April, a coalition of 55 civil liberties groups and security experts signed onto an open letter opposing it. In July, the Department of Homeland Security itself warned that the bill could overwhelm the agency with data of “dubious value” at the same time as it “sweep[s] away privacy protections.” Most notably, the biggest aggregator of online private content, Facebook, vehemently opposed the legislation; however, a month ago it was “surprisingly” revealed that Zuckerberg had quietly been on the side of the NSA all along, as we reported in “Facebook Caught Secretly Lobbying For Privacy-Destroying “Cyber-Security” Bill.”
Following that blitz of opposition, the push to pass CISA was tabled after a White House threat to veto similar legislation. Then, quietly, CISA reemerged after the same White House mysteriously flip-flopped and expressed its support for precisely the same bill in August.
And then the masks fell off, when it became obvious that not only are corporations eager to pass CISA despite their previous outcry, but that they have both the White House and Congress in their pocket.
As Wired reminds us, when the Senate passed the Cybersecurity Information Sharing Act by a vote of 74 to 21 in October, privacy advocates were again “aghast” that the key portions of the law were left intact, portions which they said make it more amenable to surveillance than to actual security, and that Congress had quietly stripped out “even more of its remaining privacy protections.”
“They took a bad bill, and they made it worse,” says Robyn Greene, policy counsel for the Open Technology Institute.
But while Congress was preparing a second assault on privacy, it needed a Trojan Horse with which to enact the proposed legislation into law without the public having the ability to reject it.
It found just that by attaching it to the Omnibus $1.1 trillion Spending Bill, which passed the House early this morning, passed the Senate moments ago and will be signed into law by the president in the coming hours.
In a late-night session of Congress, House Speaker Paul Ryan announced a new version of the “omnibus” bill, a massive piece of legislation that deals with much of the federal government’s funding. It now includes a version of CISA as well. Lumping CISA in with the omnibus bill further reduces any chance for debate over its surveillance-friendly provisions, or a White House veto. And the latest version actually chips away even further at the remaining personal information protections that privacy advocates had fought for in the version of the bill that passed the Senate.
It gets worse: it appears that while CISA was on hiatus, US lawmakers – working under the direction of corporations and the NSA – were seeking to weaponize the revised legislation, and as Wired says, the latest version of the bill appended to the omnibus legislation seems to exacerbate the problem of personal information protections.
It creates the ability for the president to set up “portals” for agencies like the FBI and the Office of the Director of National Intelligence, so that companies hand information directly to law enforcement and intelligence agencies instead of to the Department of Homeland Security. And it also changes when information shared for cybersecurity reasons can be used for law enforcement investigations. The earlier bill had only allowed that backchannel use of the data for law enforcement in cases of “imminent threats,” while the new bill requires just a “specific threat,” potentially allowing the search of the data for any specific terms regardless of timeliness.
Some, like Senator Ron Wyden, spoke out against the changes to the bill in a press statement, writing that they had worsened a bill he already opposed as a surveillance bill in the guise of cybersecurity protections.
“Americans deserve policies that protect both their security and their liberty,” he wrote. “This bill fails on both counts.”
Senator Richard Burr, who had introduced the earlier version of the bill, didn’t immediately respond to a request for comment.
Why was CISA included in the omnibus package, which just passed both the House and the Senate? Because any “nay” votes – or an Obama veto – would also threaten the entire budget of the federal government. In other words, it was a question of either Americans keeping their privacy or halting the funding of the US government, in effect bankrupting the nation.
And best of all, the rushed bill means there will be no debate.
The bottom line, as OTI’s Robyn Greene said: “They’ve got this bill that’s kicked around for years and had been too controversial to pass, so they’ve seen an opportunity to push it through without debate. And they’re taking that opportunity.”
The punchline: “They’re kind of pulling a Patriot Act.”
And when Obama signs the $1.1 trillion Spending Bill in a few hours, as he will, it will be official: the second Patriot Act will be law, and with it what little online privacy US citizens may still enjoy will be gone.
If the government has anything to do with it, privacy just got a whole lot worse. What the draft Investigatory Powers Bill holds for everyone is exactly what all the criticism was about in the first place – except it attempts to make it lawful.
The Bill proposes that instead of deceiving every citizen in the UK it will now simply admit to carrying out mass collection of our data and still hack into and bug our personal devices such as computers and smart phones. It’s the Snoopers Charter but as previous attempts were rejected, the Home Secretary is having another stab at it. Same thing, different words.
We already knew that the security services were doing this, but instead of storing all this information themselves, they now propose that ISPs, or internet service providers, capture – or steal, depending on your view – every single online movement you make and hand it over to whoever, whenever. Specifically, the security services, the police and “other public bodies”.
We are talking about every single internet user as well, no exceptions: your mum, aunty, granny and your little daughter. Well, just one lucky little category escapes this Orwellian occupation of personal space … politicians. Of course! And who is the only person who can authorise the hacking of their data? The prime minister. Sounds ominous, doesn’t it?
Theresa May went as far as to say that she had ‘engaged’ with certain civil liberty groups prior to preparing the draft. Who, exactly? Because all the civil liberties groups in the UK have gone to great efforts to win back some of our hard-fought-for liberties that were stolen while the perpetrators were hiding from public view.
The UK now has not only the world’s most sophisticated mass surveillance system – confirmed by the fact that the US uses it because it cannot legally do the same on its own territory – but a system that every despot and dictator since Augustus Caesar founded the Roman Empire would aspire to having. Literally.
The draft Bill provides the power to require ISPs to retain data, but a warrant with judicial oversight is required before that data can be handed over, or for an ISP to assist in a targeted interception. That would be fine, but the government didn’t seek warrants when it was legally required to, before it got found out – so why does anyone think it should be trusted in future? And why would anyone trust judicial oversight? One only has to think of judges such as Baroness Butler-Sloss and government stooges like Fiona Woolf and the wide-ranging inquiry into child abuse claims, along with those conveniently lost documents and other such cover-ups.
And what of the ISPs and communication providers? The non-stop data breaches of so-called secure servers were called heavily into question not by the huge data losses at many structural corporate suppliers such as banks and utility companies, but by TalkTalk, who astonishingly have managed not only to be hacked three times in one year, but couldn’t be bothered to protect their customers from a bunch of bored teenagers using a rudimentary security hack, who took the personal details of 160,000 customers (again).
Vodafone was hacked only last month. Again, customers’ bank details were stolen. In February, GCHQ and the US National Security Agency hacked into the internal network of the largest maker of mobile phone SIM cards in the world in order to steal encryption keys and compromise the security of mobile phones on the Vodafone, EE and O2 networks. BT phone lines have been hacked, and if you want to learn how to hack Sky routers, HERE’s the information freely available online. Frankly, if you’re not a hacker or haven’t been hacked, you’re a no-one nowadays.
Special protections are supposedly provided for certain professions such as journalists, whose need to protect their sources is recognised. But that didn’t stop two police forces breaching a newly revised code of practice on the protection of journalists’ sources a few months back, according to a report by the interception of communications commissioner as reported by the Guardian.
Surprisingly, there is little of David Cameron’s efforts to completely ban encryption. There’s a reason (apart from the one where they’ve already stolen all the encryption keys).
It clearly states that there will be no additional requirements in relation to encryption beyond those in the existing RIPA legislation, which it defines as requiring communication service providers (CSPs) “to maintain permanent interception capabilities, including maintaining the ability to remove any encryption applied by the CSP”.
Presumably, maintaining a ‘permanent interception capability’ simply means that ‘back doors’ to encrypted data must be maintained by the CSPs. There is no good argument for encryption backdoors, and they won’t make us safer from terrorism. They might do the opposite, as many technical experts agree.
As civil liberties group Liberty points out – “Under RIPA hundreds of public bodies have access to the last three types of surveillance (covert surveillance, informants, undercover operatives, communications including emails, calls and websites) including over 470 local authorities. Surveillance can be authorised for a wide range of purposes which includes such vague purposes as preventing ‘disorder’ or collecting tax.”
To put this into perspective, the 470 local authorities and public bodies that used the RIPA laws – laws designed to catch terrorists – used them to hunt down the non-payment of BBC licence fees. It also emerged that more than half of councils in England were using anti-terror laws to spy on families suspected of “bin crimes“. In another case, Poole Borough Council admitted to using RIPA laws to spy on a family for nearly three weeks to find out if they were lying about living in a school catchment area.
So extreme have these ‘public bodies’ become that in Scotland, the anti-terror legislation has been used hundreds of times over the last 3 years for minor offences like parking breaches, dog fouling and even monitoring for underage sun-bed use.
This total abuse of laws by government ‘bodies’ is being used for nothing more than population control. The system the government seeks to adopt falls little short of what the most repressive regimes in the world would attempt to achieve.
If it was not for the likes of Edward Snowden, we would not be having this debate as the government would not have allowed it in the first place and therefore any oversight should be independent of government influence.
In the 12 years preceding the invasion of Iraq, 65 people in Europe were killed by various ‘terrorist’ attacks, mainly in France, Italy and Greece. In the 12 years since that fateful invasion, the terrorist kill rate has increased by nearly 600%. Far from making their citizens safer, politicians have achieved the opposite.
If the escalation of terrorism in Europe says anything at all, it is that draconian mass population surveillance of the kind these proposals envisage does not work. With all the surveillance in the world, the so-called Paris mastermind is still on the loose. Don’t forget it took 8 years for the might of the USA to hunt down an old man with a beard living in a normal house next to a military base, suffering from kidney problems, low blood pressure, an enlarged heart and a serious case of dry skin – called Osama Bin Laden. Was it because, as the world’s most hunted terrorist of all time, he decided not to use the internet? Clever him.
It does, however, say an awful lot about how our politicians are thinking when it comes to our safety, security, freedom and democracy, all of which are compromised by powers such as these being given to the authorities.
As promised earlier in the year, the Conservative government is granting British spy agencies explicit rights to hack into smartphones and computers. Set to be introduced by Parliament next month, the forthcoming Data Retention and Investigatory Powers Act (DRIPA) will provide a legal basis for intelligence agencies to hack into computerised systems throughout the U.K.
Own a smartphone? Ever buy things online? Use social networks? Chances are that your data has passed through U.K. Government Communication Headquarters (GCHQ) surveillance programmes — particularly if you are a foreign national.
According to the Independent, spy agencies will be able to take over a phone remotely and install software that has the ability to examine your data at any time. Rushed through Parliament in July 2014, the new bill enables the Home Secretary to order communication companies to retain emails, calls, texts, and web activity of everyone in the U.K. for 12 months. Similar powers could also be used to target other databases, such as medical, travel, and financial records — including the records of those whose communications are deemed confidential, such as doctors, lawyers, journalists, and MPs.
Privacy International has taken British spy agencies to court over bulk data-harvesting. Earlier this year, Deputy Director Eric King, said: “Secretly ordering companies to hand over their records in bulk, to be data-mined at will, without independent sign off or oversight, is a loophole in the law the size of a double-decker bus.”
He added, “Bulk collection of data about millions of people who have no ties to terrorism, nor suspected of any crime is plainly wrong. That our government admits most of those in the databases are ‘unlikely to be of intelligence value’ but that the practice has been allowed to continue, shows just how off course we really are.”
During a recent interview with Amnesty International, whistleblower Edward Snowden was asked what he would say to those who say they have nothing to hide and mass surveillance doesn’t matter:
“It’s not about having nothing to hide, it’s about being you. It’s about being friends with who you want to be friends with, without worrying about what it looks like on paper or inside some private record in some dark government vault,” he said.
“It’s about realising there’s a reason we close the bathroom door. There’s a reason we don’t want the police to have a video camera where they can watch us while we’re sitting in the bubble bath. There’s a reason everybody gets so concerned about the Samsung TV that’s recording what you say in your living room, and then sending it to third parties. This is what you’re going to get. You’re not going to watch TV any more. TV is going to watch you.”
Asked if he had any regrets, he said he had one — that he should have come forward sooner.
“Had I done so, I think we would have a much greater degree of liberty in our online lives. Because the biggest challenge we face in reforming these surveillance programs is that, once the money has been spent, and once the practices have been institutionalized in secret, without the public knowing, it’s very difficult to change them,” he said.
The terms cyber war and infowar have been a constant in many articles written about the conflict in Ukraine. The problem with the terms is that the concepts are so new that definitions vary from an ignorant “troll” rant to a hacker who destroys the controls on a dam. The troll is an annoyance. The dam that bursts and kills hundreds of people in their sleep is not. The military definition of a cyber attack revolves around real-world injury, death, or damage.
Then, there is an in-between world where most freelance cyber mercenaries work. Their job is to get as close to the threshold of an obvious cyber attack as they can without crossing that line. They are the freelance contractors that countries like Ukraine are hiring to find and target enemies (any person not supportive of Ukrainian Nationalism or taking what appears to be a pro-Russian stance).
Hiring freelancers gives them a veneer of plausible deniability for the consequences and responsibility. The means, methods, and anonymity of cyber do the rest.
In the early 2000s, cyber freelancer Aaron Weisburd pioneered the use of cyber and coordinated online/offline attacks on activists, journalists, and alternative media websites. Early in the decade he found that by throwing around terms like “supporting terrorists” he could get internet providers to shut down websites. He could get employers to fire employees. His group could force social and civic groups to shun his victims. After all, who wants to consort with “terrorists”? Weisburd found he could even get local banks to close checking accounts. He did this by networking with a few thousand like-minded people who hacked social accounts, planted “evidence,” and complained about his victims to Homeland Security and the NSA.
Today he works with the Ukrainian Information Ministry and Ukrainian Ministry of Defense on the Peacekeeper project. In his current employment, Weisburd has scaled up his operation: in Ukraine alone, Peacekeeper has over 40,000 people working on the project. Ukraine intends to be the world leader in destroying nonconformity to “anti-western” thought.
The methods he employs are considered crimes under international law. When applied to a conflict like the Ukrainian war, his methods fall under crimes against humanity according to the Tallinn Manual on Cyber War. The reason, according to the manual – which is defining cyber war – is that you cannot attack civilians or non-combatants. Even if the attack is non-kinetic (not direct), attacking civilians is an act of war.
The damage to people and property is no less real than a dam bursting. But because it starts in the cyber world, the locations of the victims can be thousands of miles apart. It makes it very difficult to piece together the connections and tie all the victims to the same event (planned cyber attack) even if you are looking for it.
The social fabric of the internet is what makes this possible. How many people online are “good friends” that you have never met? This still-new online phenomenon is the area that the cyber-merc exploits.
What does it mean to be attacked by freelance cyber-mercs? Antiwar.com readers are about to find out.
On October 13th, 2015 Justin Raimondo published an article at Antiwar titled “The New McCarthyism.” The article very briefly mentions Weisburd in a not so flattering light.
“The sheer kookiness of this anonymous obsessive is truly a sight to behold: here is his diagram of “problematic social networks” of alleged “Kremlin agents.” One imagines he stayed up all night working on it, crouched over his computer, his eyes gleaming with fanatic energy,…”
The question is: if the troll Weisburd had a problem with Justin Raimondo’s article, why not write to him directly? If Weisburd were just an annoying troll, why not write “trollish” comments?
Antiwar.com has a daily readership of over 30,000 people. Its metrics are superb. Google ranks it as a PR 6, which puts it in an enviable and authoritative position in search results. It all sounds untouchable by an internet troll. But wait…
“Weisburd has not merely “dismantled” websites. He has harassed individuals engaged in perfectly legal online dissent, threatened their family members, harassed their employers, and harassed their web hosts. He regularly uses lies, disinformation and threats to accomplish these goals. Weisburd decides what is “threatening.” He considers all effective dissent threatening. Many of Weisburd’s “foes” are innocent Americans exercising their right to free speech…”
In the same time frame as Antiwar’s anti-neo-McCarthyism article and its apt description of the kooky, zagnut Weisburd and his gleaming eyes, I had just published a second article on what Aaron Weisburd has been doing with his time. On a personal note, I like Justin Raimondo’s description much better. To keep this article focused, links to Weisburd, his cronies, and their crimes will be at the end.
“When I say ‘engaged’ I mean really engaged. The link to this site is the word ‘here’ in the paragraph above. Nevertheless I collected a sample of 50+- IP addresses. Thank you Mr. Justin, you are an eminently useful idiot...Also, many (most?) of the US readers were at work when they visited antiwar.com. Those US readers are concentrated in New York metro, Washington DC, greater Boston, the Bay Area, and Illinois (Chicago and main campus, U of I). Meanwhile, the Russian readers (there are only two in the dataset) are split between Moscow and Saint-Petersburg.”- Andrew Aaron Weisburd @webradius
Geolocation only has two purposes. The first is to stalk you. The second is to target you. Which do you suppose is happening at Antiwar.com?
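To illustrate how little effort “collecting a sample of IP addresses” actually takes, here is a minimal sketch (Python standard library only; the log lines, IP addresses and helper names are invented for illustration, not taken from Weisburd’s actual tooling) of harvesting visitor IPs from an ordinary web server access log – the raw material that a geolocation lookup then turns into a map of readers:

```python
import re
from collections import Counter

# In the Apache/Nginx "combined" log format, the client IP is the first field.
LOG_RE = re.compile(r'^(\d{1,3}(?:\.\d{1,3}){3}) ')

def collect_ips(log_lines):
    """Extract client IPs from access-log lines; return hit counts per IP."""
    hits = Counter()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m:
            hits[m.group(1)] += 1
    return hits

# Invented sample log lines (RFC 5737 documentation addresses).
sample = [
    '203.0.113.7 - - [13/Oct/2015:10:00:00 +0000] "GET /article HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
    '203.0.113.7 - - [13/Oct/2015:10:05:00 +0000] "GET /article HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
    '198.51.100.4 - - [13/Oct/2015:10:06:00 +0000] "GET /article HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]

print(collect_ips(sample))
```

Each harvested address can then be fed to any commodity GeoIP database to place a reader in a city, which is the step that turns a hit counter into the kind of “New York metro, Washington DC, greater Boston” breakdown quoted above.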
What was the “neo-Stalinist” Justin Raimondo’s great crime? He ended the article on this thought.
“Do we really want to relive that era of repression, scare-mongering, and ideological conformity? Or can we have a real discussion about what a rational policy toward Russia ought to look like?”
If readers at Antiwar.com start having the same problems that the readership at Indymedia had, perhaps it’s time for a class-action lawsuit.
Or perhaps it’s time for a class-action lawsuit to stop Weisburd’s employer, the government of Ukraine, from doing this in the first place. When you understand the facts, and all the targeted readers (people) and websites (businesses) lumped together in the US, EU, or Canada, it crosses the threshold of cyber war as an act of war.
Weisburd and his colleagues are currently soliciting for other “experts” to join them in this criminal enterprise. It’s time to stop them.
Mr. Weisburd, Mr. Harding, every time I think of either of you one movie line crosses my mind and I smile.
“You gonna get used to wearing them chains after a while, [Aaron]. Don’t you never stop listening to them clinking, ’cause they gonna remind you what I been saying for your own good… What we’ve got here is failure to communicate. Some men you just can’t reach. So you get what we had here last week, which is the way he wants it. Well, he gets it. I don’t like it any more than you men.” – Cool Hand Luke