Europe Is About To Create A Link Tax: Time To Speak Out Against It
from the speak-up dept
We’ve written plenty of times about ridiculous European plans to create a so-called “snippet tax,” which is more officially referred to as “ancillary rights” (and is really just about creating a tax on Google). The basic concept is that some old-school newspapers are so lazy, and have so failed to adapt to the internet — and so want to blame Google for their own failures — that they want to tax any aggregator (e.g., Google) that links to their works with a snippet without paying for the privilege of sending those publishers traffic. As you may remember, Germany has been pushing for such a thing for many, many years, and Austria has been exploring it as well. But perhaps the most attention-grabbing move was the one in Spain, which not only included a snippet tax but made it mandatory. That is, even if you wanted Google News to link to you for free, you couldn’t allow that. In response, Google took the nuclear option and shut down Google News in Spain. A study showed that the law has actually done much to harm Spanish publishers, but the EU pushes on, ridiculously.
As discussed a year ago, some in the EU Commission are all for creating an EU-wide snippet tax, and as ridiculous and counterproductive as that is, the Commission is about to make a decision on it, and the public consultation on the issue is about to close (it ends tomorrow). Thankfully, many, many different groups have set up simple tools for understanding and responding to the consultation — which you should do. Here are just a few options:
There’s also a good detailed discussion of why this snippet tax is the wrong solution from European copyright lawyer Remy Chavannes. Here’s just a… um… snippet (that I didn’t pay for):
In fact, there is precious little indication that the challenges currently being faced by press publishers are due to the lack of sufficiently broad intellectual property rights. And if insufficient IP rights are not a significant part of the problem, increasing IP rights is unlikely to be a significant part of the solution. At a recent conference in Amsterdam, speakers from publishers, academia, politics, civil society and the internet sector were in near-total agreement that a neighbouring right for publishers would solve nothing at best. It would seem more fruitful to investigate other ways in which the position and prospects of publishers of quality journalism can be increased, e.g. via subsidies, tax facilities, the partial repurposing of public broadcasting funds, or other measures that reflect the significant value to a democratic society of having a vigorous, free and independent press.
Implementation of a neighbouring right would bring significant uncertainty, costs and risks, not just to authors and publishers, but also to the eclectic group of platforms, intermediaries and other service providers that play a role in facilitating the publication, discovery and consumption of press content. Larger, existing broad-based platforms will be incentivised to reduce or remove service features that might trigger the new neighbouring right. New entrants are likely to be discouraged, particularly new entrants who want specifically to serve the market for finding and consuming press content. Depending on the scope of any neighbouring right, moreover, it could also negatively impact providers of social networks as well as providers of access, caching and hosting services. Increasing costs, complexity and uncertainty for such a broad category of service providers threatens the free flow of information and investment in – and availability of – innovative digital services, as well as the commercial prospects for publishers and authors.
Good stuff, and I urge you to read the whole thing — and to respond to the consultation before the EU Commission destroys the link.
Beginning June 5th 2013, a series of explosive articles ran in The Guardian (and subsequently a handful of other newspapers/magazines) detailing a vast web of global surveillance (engineered by the U.S. National Security Agency and U.K. partner GCHQ). The revelations were backed by large troves of primary information (code-names/programme descriptions) and internal documents (charts and diagrams) apparently directly sourced from the NSA.
A storm of controversy soon erupted over the breadth and ubiquity of this global surveillance. Emerging details on the myriad previously secret programmes made it clear that email, text, phone data and communications were being scooped up, recorded and analysed on a mammoth, almost unimaginable scale around the world.
On June 9th, four days after the earth-shaking leaks began, the then-29-year-old Edward Snowden identified himself as their source. Secreted in a Hong Kong hotel room, Snowden volunteered his motives and personal history to a voracious media and public. What followed over the succeeding two weeks resembled an international spy thriller, as Snowden fled from one safe house to another throughout Hong Kong, always one step ahead of the press and (presumably) U.S. law enforcement.
The details are sometimes contradictory, but apparently Snowden then boarded a flight from Hong Kong June 23rd en route (via Moscow and Havana) to safe haven in South America. Oddly, sometime during that flight the U.S. government revoked Snowden’s passport, causing him to be stranded in Moscow’s Sheremetyevo International Airport. After a lengthy period (somehow, and somewhat miraculously, avoiding both assassins and journalists for over a month) Snowden received legal asylum and left the airport to begin a new life in the Russian Federation.
Meanwhile, various news outlets continued a drip-feed of dramatic and ‘Orwellian’ revelations.
Snowden had become an iconic figure. Celebrated by ‘progressives’ as a whistleblower and hero, derided by ‘conservatives’ as a traitor and fugitive – he lives presently (we’re told) with his girlfriend in Russia, and appears (sporadically) as an advocate of communications privacy and government accountability.
Further theatrics were provided by the incidents of an Ecuadorian Presidential plane being forced to land, numerous international political leaders’ communications being routinely tapped and fierce debate about the probity of Snowden’s actions and the actual spying regime he exposed. American conservatives and pundits denounced his ‘treason’ and pleaded for his ‘extrajudicial assassination’ while others hailed his patriotism.
It was a thrilling, captivating and microscopically reported tale.
Yet somehow…it doesn’t quite stack up. Some thread of doubt remains, some scent of faint incredulity lingers.
Questions provoked by the official narrative are partly logistical, partly philosophical and decidedly pragmatic.
For starters: are we really to believe (especially in light of his own revelations of an all-pervasive clandestine surveillance regime) that Snowden, after booking a flight to Hong Kong (and soon after – numerous hotel rooms) all admittedly on his own credit card, could not be immediately traced and apprehended (or ‘neutralised’) shortly after (assumedly) the entire U.S. security apparatus had been alerted to his actions and movements? Is it really plausible that possibly the world’s most wanted man (at that moment) could just ‘go-to-ground’ and evade the ‘all-seeing-eye’ for a full fortnight in a cosmopolitan and highly-accessible city?
Some sources report that Snowden gave up his rental home in Hawaii (as he was ostensibly ‘transferring jobs’) just days before he ‘fled’ to Hong Kong and global infamy. How convenient.
Snowden also comes from a family steeped in the national security establishment. His grandfather was a rear admiral and subsequently a senior FBI official (present at the Pentagon on September 11th 2001), while apparently “everybody in my family has worked for the federal government in one way or another.” Snowden himself enjoyed stints at the CIA and NSA before landing at defence contractor Booz Allen Hamilton. Surely it would be starkly traumatic for one so tethered to the military-industrial complex to suddenly turn ‘traitor.’
Still other questions rudely interrupt the ostensibly chivalrous tale.
To put it bluntly, Snowden is possibly just a little too young to be a convincing whistleblower. 29-year-old whistleblowers are statistically a rare thing indeed. By definition, zealots must start with zeal. Only over time is it plausible for the zealot to become disillusioned with the ugly machine of which he is but a cog. Just a handful of years before turning tumultuous ‘whistleblower,’ Snowden was to be found on internet tech forums waxing enthusiastic about the security state. His ‘gestation’ from true believer to ground-quaking operative seems unusually, and unconvincingly, brief.
Fellow whistleblower William Binney is more likely (at least by age) to be the real deal. Over three decades in spy-craft he reportedly became increasingly frightened by the metastasising spectre of the national-security-complex. His revelations, while similar in tone to Snowden’s and predating them by over a decade, were greeted with little fanfare (and considerable personal harassment and marginalisation).
By contrast, Snowden was granted immediate and enthusiastic access to the most venerated organs of ‘controlled opposition’ and officially sanctioned stenography. Each outlet sticking dutifully to their established charter and brand demographic.
While (by some sleight-of-hand) still able to present itself as ‘progressive’ and ‘independent’, the New York Times is neither. Socially liberal yet aggressively war-like in foreign policy tastes (just how elites like it), the NYT has led the charge to countless illegal and immoral invasions/wars/actions and interventions, baying for rivers of blood from Iraq to Syria and beyond.
Likewise, the U.K. Guardian gives oxygen to a raft of somewhat nebulous social concerns with po-faced righteousness, while being a clamorous cheerleader for bombing and murder from Libya to Ukraine (how many times can one newspaper repeatedly invent the ‘Russian invasion of Ukraine’ and retain any kind of credibility?).
Similarly, there is something decidedly absurd about the pretence of exclusive Snowden techno-anarchist sound-bites gracing the pages of neocon-beltway-bible The Washington Post.
And yet those glorified minarets of state/private propaganda champion a supposedly dangerous traitor/whistleblower absconded into enemy territory? It doesn’t add up.
Indeed, The Guardian tasked one of its most voracious experts in officially-sanctioned fellatio (Luke Harding), to mint the approved novelisation of poster-boy Snowden’s exploits. Harding’s long stint of feeble, flaccid journalism in thrall to MI6 and deep-state enabling has finally found just recompense in a big-time Hollywood pay-cheque (his book adapted for Oliver Stone’s forthcoming Snowden biopic).
As a blunt instrument of propaganda, Clint Eastwood’s “American Sniper” might indeed make Leni Riefenstahl blush, but could the Snowden gambit be a far more insidious and subtle secret-state strategy?
In purely practical terms alone, the ‘Snowden revelations’ have been an unmitigated victory for the national security state. A global public that was previously blissfully unaware of its position as central target of mass surveillance has now been thoroughly (and generally, comfortably) acclimated to that very idea. A raft of recent studies conclude that the Snowden revelations have had a marked chilling effect on people’s online habits and expressions of dissent.
Indeed, for a permanent cyber-Panopticon to be truly effective as a means of social control, the inmates (the global public) must be at least peripherally aware of its existence. Assuming it does actually exist, and that one of its aims is (logically) the suppression of popular dissent through mass-scale self-policing, a gargantuan surveillance apparatus also has clear uses as a giant blackmail machine (this would neatly explain the perpetually compliant response from the legislature and judiciary) and as a profound and unimaginably effective tool of social engineering.
Perhaps we are already there? Various leaks about Facebook and the Pentagon’s partnered experiments in ‘crowd herding’ and ‘emotional contagion,’ along with the underreported long-term history of tech corporations (Google, Microsoft, Facebook etc.) co-parenting with the NSA-CIA-Pentagon-DARPA nexus, hint that the entire electronically mediated womb-environment of today might just be one vast dark Psy-Op (interestingly, Vladimir Putin once referred to the internet as a ‘CIA Project’).
Software already exists to constantly monitor social media, analyse (in real time) public trends and responses, and generate automatic (i.e. robotic) comments/posts supporting (or denigrating) a chosen policy/worldview/opinion/initiative/product. We know (ironically, largely via Edward Snowden) that our rogue intelligence agencies have been busy launching battalions of cyber-warriors and studying the psychology of online relations and the very architecture of our intrinsic belief systems.
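The capability described above requires nothing sophisticated. Here is a minimal, purely hypothetical sketch of sentiment-triggered auto-posting; the keyword lists and canned replies are invented for illustration, and a real system would use far more elaborate language models and platform APIs:

```python
# Hypothetical illustration only: the keyword lists and canned replies below
# are invented for this sketch, not taken from any real system.

POSITIVE = {"support", "safe", "necessary", "protect"}
NEGATIVE = {"illegal", "spying", "unconstitutional", "abuse"}

def score(post: str) -> int:
    """Crude polarity score: +1 per positive keyword, -1 per negative keyword."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def auto_reply(post: str) -> str:
    """Select a canned talking point based on the post's polarity."""
    s = score(post)
    if s < 0:
        return "If you have nothing to hide, you have nothing to fear."
    if s > 0:
        return "Glad to see people who understand the threats we face."
    return "Interesting point, worth a closer look."
```

Loop that over a stream of posts pulled from a platform's API and you have, in miniature, the kind of automated comment battalion described above.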
After endless reams of circus commentary and vast volumes of hot air, the net result of the Snowden saga has in fact been the legitimation, legalisation and expansion of the very same unwarranted, unconstitutional, unnecessary (and surely intrinsically illegal) indiscriminate surveillance regime.
‘Mission creep’ has become a stampede, as supine governments rush a candied ‘national security’ wish-list of mass surveillance (and police state) initiatives past a bewildered and disenfranchised public. Nowhere is this more rudely obvious than in Australia, Canada, the U.K. and the U.S. itself, all of which have increased the state’s options for surveillance and data retention in the months since the ‘Snowden revelations’ (while performing a pantomime of ‘debate’ and ‘consultation’).
The ‘terrorist’ bogeyman (looking understandably tired and unconvincing) has been trotted out yet again to justify all this breathless chicanery. That these nations are all working from the same international (intelligence agency?) playbook seems in little doubt – the timing, wording and circumstances of (for example) recent surveillance ‘reforms’ in Australia, Canada and France being so strikingly similar. Likewise, a similar series of dubious provocations, sieges and ‘terrorist’ attacks predictably and magically manifested themselves just prior to the legislation being tabled – the public must, of course, be cajoled in the right direction.
Is it not possible that we have been completely gamed? The mysterious and messianic figure of Edward Snowden, introduced to acclimatise the global public to the very idea of an endless, all-pervading surveillance state (entirely unaccountable with unstated goals and limitless technology). Snowden as ‘progressive’ Trojan Horse (perhaps much like Barack Obama before him) to activate and mobilise the public passion, only to see it hijacked and channeled into Room 101. After much ‘debate’ from captured politicians and a puppeteer punditry the (entirely noxious) ‘security regime’ is solidified and expanded – the illusion being, that ultimately ‘democracy’ functioned and the population actually ‘chose’ omniscient observation – for the ‘greater good.’
Snowden himself perhaps reminds one of an articulate Lee Harvey Oswald-like character, a brave young patriotic warrior in deep-cover embrace with the Russian bear, dancing a dangerous and duplicitous deep-state deception. Knowingly (or unknowingly) a tool of clandestine forces. Snowden should bear in mind that he too, if he outlives his usefulness, might be thrown to the lions (just like Oswald was).
Imagine for a moment that the Snowden saga is a test. Having built a labyrinthine structure for social control (a compliant media and a cowed public that cheerfully delivers itself up to enormous data-mining projects like social media) — in fact, an almost entire reality-set constructed and delivered electronically — surely one would be tempted to test it? To see if complete movements, debates, paradigms and worldviews could be generated out of whole virtual cloth and controlled? A test tone, an electro-static ripple, a tremulous shock-wave to the online body electric.
Would it really be possible to introduce an idea (global omniscient surveillance) itself intrinsically repugnant, and yet shepherd it through a controlled release (and discourse) to have it ultimately accepted, completely present and yet essentially invisible? To test the various nuances and feedback loops in media (and online social media) that now might just grant remote Panopticon control of an entire population and their ‘internal landscape’? An electronically mediated ‘reality’ where ideas and beliefs are mere manifestations of algorithms and software?
Conservatives, progressives, activists, lethargists – all actors in the traveling circus of ‘representative democracy’ and ‘online society’?
Mass surveillance has, for the larger segment of the U.S. populace, become an integral facet in the illusory feeling of security. But does it serve any purpose at all — other than providing the Surveillance State a handy excuse for keeping tabs on anyone it chooses, while simultaneously quashing every one of our paltry remaining legal rights?
While it may be comforting to feel the overarching blanket of indiscriminate surveillance keeps us all safe from harm, the deaths of at least 50 people in an Orlando nightclub prove indisputably the contrary.
In fact, the weary excuse employed by the National Security Agency and the Federal Bureau of Investigation — and, indeed, every agency — that they spy on you to keep you safe is disproven by the events of the early morning hours of this past Sunday.
No fewer than 50 people perished at the hands of at least one gunman in an LGBTQ-friendly Orlando nightclub as they unwound from the week’s stress on Latin night in the early morning hours of June 12. And while foreign news outlets first reported the mass shooting, American media soon caught up to what had taken place on U.S. soil.
As the attack unfolded over a period of hours, Pulse nightclub took to Facebook to sound the alarm, posting, “Everyone get out of pulse and keep running” — as the shooter (or perhaps shooters) mowed down revelers and reportedly took survivors hostage.
In the aftermath of the carnage, several aspects of the attack became startlingly clear.
First, discrepancies in eyewitness’ accounts of unfolding events — such as on-the-spot interviews describing not one, but two shooters — were not slated to hit mainstream headlines.
Second, any number of dragnet, mass surveillance programs — or even those targeting, specifically, ‘questionable’ individuals — had done nothing to foreshadow, much less prevent, the slaughter for the NSA or FBI.
How could that be? How could programs tasked with specifically trawling social media, personal correspondence, and thus profiling individuals most at risk for committing such atrocities, possibly miss the mark — exponentially?
Simple. These programs were never designed to detect, stop, or catch actual terrorists in the first place.
What? Seriously? You mean the government’s welcoming, protective arms did nothing at all to save us?
But in the aftermath of a mass murder event, it’s expected we will all ignore that particularly relevant detail and succumb to further intrusions on our most basic liberties, cozying into the safe blanket of surveillance — which most frequently targets those who stand against the very State causing extremism in the first place.
Shortly after this disgusting infringement on the personal freedoms we hold dear, calls for stricter controls on guns and on freedom of association emanate from the mouths of politicians — who, no less, happen to be involved in contentious electoral proceedings. We are, of course, expected to swallow this — no questions asked — as the U.S. government moves to ‘rein in terrorists and their agendas.’
Don’t be fooled. Though the quote by Benjamin Franklin — “Those who would give up essential Liberty to purchase a little temporary Safety deserve neither Liberty nor Safety” — has been so incredibly skewed from its original meaning, the modern understanding holds fast.
When we base the usurpation of freedom on the fleeting comfort provided by the government in times of tragedy and strife, the resultant disavowal of rightful freedom soon follows; and to no laudable ends, whatsoever. Consider recent reports the NSA has expanded plans to use your so-called ‘smart’ appliances against you — and now seeks to expand those programs to include even biomedical devices, like pacemakers.
Consider Americans under constant scrutiny — as contentiously revealed by Edward Snowden in 2013 — for the basic act of using their cell phones or deeming encrypted email accounts necessary. Or journalists under the watchful NSA eye. Or, worse, the complete erasure of rights inextricably linked to the same concept of terrorism far too many Americans willingly accept as the root of the entire issue.
We are a nation under attack, indeed, but not by the brown people those in power would have you believe are out to steal our freedoms. No, to the contrary, we are under attack by the very government that would commandeer our basic civil liberties under the all-too false guise the terrorists want what we have.
But we have too little. We have too few of the basic freedoms that once defined us as a people who broke away from the governmental chokehold. What we’re left with, in the meantime, are the scraps and trappings of a liberty so far removed from its original intent as to be ineffectual in preserving the same.
Whatever your opinions, or even assertions, about the events in Orlando — understand — we are gazing over a precipice from which there is no return. We have the temporary luxury of gazing expectantly over the edge, or we can pull back on the reins that seemingly hold us in place and say, ‘Enough.’
Enough with the facade of programs whose blueprints offer little more than the feeling of safety. Enough with a State so paranoid it seeks to stomp out any opinion in opposition to it. Enough capitulation.
We see you watching. We see you do nothing with said evidence. But most imperative of all, we see you seeing us — to no substantive ends, whatsoever.
Take the admonishments of the State proffered by whistleblowers who see the bigger picture — this will not end well. No matter the hysteria, signing away your rights can do nothing but strip you of power.
Don’t — no matter your apparent, personal justification — allow them to take more than the miles you’ve already voluntarily offered.
From the “Can you hear me now?” Russian president to the deaf Nobel Peace Prize winner[?] corporate federal United States president: Putin is essentially telling Obama that the blatant lies to Russia — indeed, to the American people and the world at large — are insanely putting everyone at risk of nuclear war, and that Russia is being pushed too far.
The US has been lying to Russia for decades, ever since 1990, when Gorbachev was told there would be no further expansion, no encroachment by NATO upon Russia. Every promise made by the US since then has been broken and disregarded, and now the US is seeking to have Romania and Poland move further toward the point of no return in forcing Russia’s hand.
Putin Warns Romania and Poland Against Installing ABM Missiles
On Friday, May 27th, Russia’s President Vladimir Putin again asserted that American President Barack Obama lies when saying that the reason America’s anti-ballistic missile (“ABM”) or Ballistic Missile Defense (“BMD”) system is being installed in Romania, and will soon be installed in Poland, is to protect Europe from Iranian missiles that don’t even exist and that Obama himself says won’t exist because of Obama’s deal with Iran. Putin is saying: I know that you are lying there, not being honest. You’re aiming to disable our retaliatory capacity here, not Iran’s. I’m not so dumb as to believe so transparent a lie as your assurances that this is about Iran, not about Russia.
Putin says that ABMs such as America is installing disable a country’s (in this case, Russia’s) ability to retaliate against a blitz invasion — something increasingly likely from NATO now that NATO has extended right up to Russia’s very borders — and that Russia will not allow this disabling of its retaliatory forces.
He said that “NATO fend us off with vague statements that this is no threat to Russia … that the whole project began as a preventive measure against Iran’s nuclear program. Where is that program now? It doesn’t exist. … We have been saying since the early 2000s that we will have to react somehow to your moves to undermine international security. No one is listening to us.”
In other words, he is saying that the West is ignoring Russia’s words, and that therefore Russia will, if this continues, respond by eliminating the ABM sites before they become fully operational. To do otherwise than to eliminate any fully operational ABM system on or near Russia’s borders would be to leave the Russian people vulnerable to a blitz attack by NATO, and this will not be permitted.
He said: “At the moment the interceptor missiles installed have a range of 500 kilometers, soon this will go up to 1000 kilometers, and worse than that, they can be rearmed with 2400km-range offensive missiles even today, and it can be done by simply switching the software, so that even the Romanians themselves won’t know.”
In other words: Only the Americans, who have designed and control the ABM system, will be able to know if and when Russia is left totally vulnerable. Not even the Romanians will know; and Putin says, “Russia has ‘no choice’ but to target Romania” — and later Poland, if they follow through with their plans to do the same.
By implication, Putin is saying that, whereas he doesn’t need to strike Romania’s site immediately, he’ll need to do it soon enough to block the ABM system’s upgrade that will leave Russia vulnerable to attack and (because of the fully functional ABM) with no ability on Russia’s part to counter-strike.
He is saying: Remove the ABM system, or else we’ll have to do it by knocking it out ourselves.
Putin knows that, according to Article Five, the “Mutual Defense” provision of the NATO Treaty, any attack against a NATO member, such as Romania, is supposed to elicit an attack by all NATO members against the attacking nation. However, Putin is saying that, if NATO is going to attack Russia, it will do so without any fully operational ABM system, and (by implication) that Russia’s response to any such attack will be a full-scale nuclear strike against all NATO nations — a nuclear war that would destroy the planet by unleashing all the nuclear weaponry of both sides, NATO and Russia.
Putin is saying that either Romania — and subsequently Poland — will cancel and nullify their cooperation with U.S. President Obama’s ABM installation, or else there will be a surgical strike by Russia against such installation(s), even though that would likely produce a nuclear attack against Russia by NATO, and a counter-strike nuclear attack by Russia against NATO.
When Putin said “No one is listening to us” on the other side, the NATO side, Putin meant: I don’t want to have to speak by means of a surgical strike to eliminate a NATO ABM system, but that’s the way I’ll ‘speak’ if you are deaf to words and to reason and to common decency.
He will not allow the Russian people to become totally vulnerable to a nuclear attack by the United States and its military allies. He is determined that, if NATO attacks Russia, then it will be game-over for the entire world, not only for Russia.
He is saying to Obama and to all of NATO: Please hear and understand my words, and be reasonable, because the results otherwise will be far worse for everyone if you persist in continuing to ignore my words.
Since posting the above — lest the uninformed still believe the US is guilty of nothing against Russia and swallow all the elite-sanctioned mainstream propaganda — here is yet more:
UK To Stockpile Tanks, Heavy Equipment Close To Russia’s Border
And then, as if on cue, NATO made it even more explicit that its primary objective remains to provoke Russia into an offensive move, when over the weekend the Times reported that the British military may soon start stockpiling tanks and other heavy equipment in Eastern Europe as part of NATO’s military build-up close to Russia’s border. The decision may come at the upcoming NATO summit in Warsaw in July.
The Patriot Act continues to wreak havoc on civil liberties. Section 213 was included in the Patriot Act over the protests of privacy advocates, and granted law enforcement the power to conduct a search while delaying notice of that search to the suspect. These so-called “sneak and peek” warrants were, law enforcement adamantly insisted, needed to protect against terrorism. But the latest government report detailing the number of “sneak and peek” warrants reveals that, out of a total of more than 11,000 sneak-and-peek requests, only 51 were used for terrorism. Yet again, terrorism concerns appear to be trampling our civil liberties.
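To make the report's figures concrete, the terrorism share works out to well under one percent. A quick back-of-the-envelope calculation, using only the approximate numbers cited above:

```python
# Figures as cited above: roughly 11,000 total sneak-and-peek requests,
# of which 51 were used for terrorism.
total_requests = 11_000
terrorism_uses = 51

share = terrorism_uses / total_requests
print(f"Terrorism share of sneak-and-peek warrants: {share:.2%}")  # about 0.46%
```

In other words, more than 99.5% of these warrants were issued for something other than the terrorism threat used to justify the provision.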
Ron Wyden, a Senator from Oregon, has been one of the most influential and significant champions of Americans’ embattled Fourth Amendment rights in the digital age. Recall that it was Sen. Wyden who caught Director of National Intelligence James Clapper lying under oath about government surveillance of U.S. citizens.
Mr. Wyden continues to be a courageous voice for the public when it comes to pushing back against Big Brother spying. His latest post at Medium is a perfect example.
Here it is in full:
Shaking My Head
The government will dramatically expand surveillance powers unless Congress acts
Last month, at the request of the Department of Justice, the Courts approved changes to the obscure Rule 41 of the Federal Rules of Criminal Procedure, which governs search and seizure. By the nature of this obscure bureaucratic process, these rules become law unless Congress rejects the changes before December 1, 2016.
Today I, along with my colleagues Senators Paul from Kentucky, Baldwin from Wisconsin, and Daines and Tester from Montana, am introducing the Stopping Mass Hacking (SMH) Act (bill, summary), a bill to protect millions of law-abiding Americans from a massive expansion of government hacking and surveillance. Join the conversation with #SMHact.
What’s the problem here?
For law enforcement to conduct a remote electronic search, they generally need to plant malware in — i.e. hack — a device. These rule changes will allow the government to search millions of computers with the warrant of a single judge. To me, that’s clearly a policy change that’s outside the scope of an “administrative change,” and it is something that Congress should consider. An agency with the record of the Justice Department shouldn’t be able to wave its arms and grant itself entirely new powers.
Let’s get into the details
These changes say that if law enforcement doesn’t know where an electronic device is located, a magistrate judge will now have the authority to issue a warrant to remotely search the device, anywhere in the world. While it may be appropriate to address the issue of allowing a remote electronic search for a device at an unknown location, Congress needs to consider what protections must be in place to protect Americans’ digital security and privacy. This is a new and uncertain area of law, so there needs to be full and careful debate. The ACLU has a thorough discussion of the Fourth Amendment ramifications and the technological questions at issue with these kinds of searches.
The second part of the change to Rule 41 would give a magistrate judge the authority to issue a single warrant that would authorize the search of an unlimited number — potentially thousands or millions — of devices, located anywhere in the world. These changes would dramatically expand the government’s hacking and surveillance authority.

The American public should understand that these changes won’t just affect criminals: computer security experts and civil liberties advocates say the amendments would also dramatically expand the government’s ability to hack the electronic devices of law-abiding Americans if their devices were affected by a computer attack. Devices will be subject to search if their owners were victims of a botnet attack — so the government will be treating victims of hacking the same way they treat the perpetrators.
As the Center for Democracy and Technology has noted, there are approximately 500 million computers that fall under this rule. The public doesn’t know nearly enough about how law enforcement executes these hacks, and what risks these types of searches will pose. By compromising a computer’s system, a search might leave the machine open to other attackers, or damage it outright.
Finally, these changes to Rule 41 would also give some types of electronic searches different, weaker notification requirements than physical searches. Under the new Rule, investigators would only be required to make “reasonable efforts” to notify people that their computers were searched. This raises the possibility of the FBI hacking into a cyber attack victim’s computer and not telling them about it until afterward, if at all.
A job for Congress — not the Justice Department
These changes are a major policy shift that will impact Americans’ digital security, expand the government’s surveillance powers and pose serious Fourth Amendment questions. Part of the problem is the simple fact that both the American public and security experts know so little about how the government goes about hacking a computer to search it. If a victim’s Fourth Amendment rights are violated, it might not be readily apparent because of the highly technical nature of the methods used to execute the warrant.
It is Congress’ job to make sure we do not let the Executive Branch run roughshod over our constituents’ rights. That is why action is so important: this is a policy question that should be debated by Congress. Although the Department of Justice has tried to describe this rule change as simply a matter of judicial venue, sometimes a difference in scale really is a difference in kind. Because the rule would allow so many searches on the order of just a single judge, Congress’s failure to act on this issue would be a disaster for law-abiding Americans.
When the public realizes what is at stake, I think there is going to be a massive outcry: Americans will look at Congress and say, “What were you thinking?”
By failing to act, Congress is once again demonstrating that it is not just useless, it’s also dangerously corrupt and incompetent.
It was recently reported that the Chicago Police Department has implemented an Orwellian new program that targets innocent citizens based on indicators suggesting they might have the potential to carry out a crime. Similar to dystopian films like Minority Report, a complex computer algorithm will track and catalog every citizen in the city, and use private data about each person to determine whether or not they could be a potential criminal.
Once an innocent civilian has been labeled as a threat, they are then notified that they have been marked as a potential criminal and that they are now under police surveillance.
This disturbing program has quietly been in place for over three years, and in that time, government agents have visited the homes of more than 1,300 innocent people who had high numbers on the list, to inform them that they are now regarded as potential criminals. According to the New York Times, Police Superintendent Eddie Johnson says that officials are stepping up those visits this year, reaching at least 1,000 more people.
“We are targeting the correct individuals. We just need our judicial partners and our state legislators to hold these people accountable,” Johnson insisted.
However, activists and advocates of civil liberties are not convinced.
Karen Sheley, the director of the Police Practices Project of the American Civil Liberties Union of Illinois, has pointed out that these innocent people are being flagged based on criteria that haven’t even been publicly established.
“We’re concerned about this. There’s a database of citizens built on unknown factors, and there’s no way for people to challenge being on the list. How do you get on the list in the first place? We think it’s dangerous to single out somebody based on secret police information,” Sheley said.
The current program is said to only target individuals who seem to show a high risk of being involved in a shooting. However, it is also important to point out that most laws, especially the bad ones, aren’t even focused on primary violations of life or property, but are instead focused on secondary actions that are seen as causal factors for these violations.
It certainly should be illegal to harm people or their property, but most modern societies, in an apparent attempt to take preventative measures, have outlawed actions that could be a precursor to actual criminal activity.
Some have referred to this concept as “pre-crime.” The idea is that people should be punished if they behave in a way that someone else is uncomfortable with, even if they have not harmed anyone.
These types of laws include all drug laws, all gun laws, seatbelt laws, and intellectual property laws: victimless, non-violent offenses where no person has been harmed and no property has been stolen or damaged.
Drugs are illegal, we are told, because their use could lead to actual crime. Guns are highly restricted because someone could get hurt. Seatbelt laws are imposed because someone could get hurt. And intellectual property law is imposed because someone may lose their investment. The arguments in favor of these laws are all overblown or flat out wrong, but the fear of future crime is always used to justify bad laws that have no basis in justice or restitution.
Our entire justice system is made up of this nonsense, which persecutes people who have not hurt anyone or anything because their actions apparently indicate that they will do something harmful in the future. To begin to target individuals before they have even done anything is taking this idea of pre-crime a step further, ushering in a new age of Orwellian surveillance.
John Vibes is an author and researcher who organizes a number of large events, including the Free Your Mind Conference. He also has a publishing company where he offers a censorship-free platform for both fiction and non-fiction writers. You can contact him and stay connected to his work at his Facebook page. You can purchase his books, or get your own book published, at his website www.JohnVibes.com. John Vibes writes for TheFreeThoughtProject.com, where this article first appeared.
Amidst a global media blackout of Anonymous’ ongoing worldwide attacks on the “corrupt banking cartels,” the hacking collective has now taken down some of the most prestigious institutions in global governance. OpIcarus has recently taken offline the World Bank, the New York Stock Exchange, five U.S. Federal Reserve Banks and the Vatican.
After announcing a global call to arms against the “corrupt global banking cartel,” the hacker collective known as Anonymous, in conjunction with Ghost Squad Hackers, has knocked more than 30 central banks offline, striking many targets at the heart of the Western imperialist empire.
An Anonymous press release explained the intention behind the operation:
The banks have been getting away with murder, fraud, conspiracy, war profiteering, money laundering for terrorists and drug cartels, have put millions of people out on the street without food or shelter and have successfully bought all our governments to help keep us silenced. We represent the voice of the voiceless. We are uniting to make a stand. The central banks which were attacked in recent days were attacked to remind people that the biggest threat we face to an open and free society is the banks. The bankers are the problem and #OpIcarus is the solution.
Operation Icarus was relaunched in conjunction with a video release announcing the beginning of a “30-day campaign against central bank sites across the world.” Since that time, the scope and magnitude of the attacks have increased exponentially, with Anonymous, Ghost Squad Hackers, a number of Sec groups and BannedOffline coordinating attacks — each focusing on separate financial institutions in an effort to maximize the number of targets hit.
In a previous interview with the Free Thought Project, an Anonymous representative clarified that the operation is in no way intended to impact individuals’ accounts held within the banks, explaining that OpIcarus is directed solely at the 1% perpetuating injustice:
We would just like to make it very clear that all targets of #OpIcarus have been Rothschild and BIS central owned banks. In fact most of the targets so far such as Guernsey, Cyprus, Panama, Jordan, British Virgin Isles, etc are in the top 10 places of tax havens for the elite. No on-line consumer accounts were harmed, no ATM’s were blocked and no personal client data was leaked. This has been a protest against the Central Banks and the 1% — no innocent or poor people were harmed.
The operation began with an initial attack on the Central Bank of Greece and was quickly followed up with a similar DDoS attack on the Central Bank of Cyprus. The hackers then targeted the Central Bank of the Dominican Republic, the Dutch Central Bank, the Central Bank of Maldives, and Guernsey Financial Services Commission, according to the official @OpIcarus Twitter account, which has been taken offline — presumably by Twitter. The National Bank of Panama and the Central Bank of Kenya were also reportedly targeted a day later, according to hacking news publication HackRead.
Additionally, s1ege, a reported Ghost Squad Hacker, tweeted about taking the Central Bank of Bosnia-Herzegovina offline and provided a screenshot to verify. The Twitter account @BannedOffline also reported the Central Bank of Mexico had succumbed to a DDoS attack by the hacking collective. The online hacktivist groups have continued to conduct a series of high-powered distributed denial-of-service (DDoS) attacks, which forced offline the websites of the Central Bank of Jordan, the Central Bank of South Korea, Compagnie Monégasque de Banque and the Central Bank of Montenegro.
Last Saturday, hackers conducted a series of 250 Gbps DDoS attacks on the Bank of France, Central Bank of the United Arab Emirates, Central Bank of Tunisia, Central Bank of Trinidad and Tobago and Philippine National Bank. That was followed up by an attack on the Central Bank of Iraq.
The initial attacks were reported in global corporate media, such as Reuters, with subsequent strikes being reported by independent outlets such as the IBTimes and HackRead. The most recent press release from Anonymous summarized some of the attacks of the past week, as global media has curiously stopped reporting on the attacks altogether.
Greetings citizens of the world, we are OpIcarus a collective of citizens from around the globe working on exposing the 1% through the global banking systems. The elite are responsible for the corruption currently taking place in all governments, media, drug cartels, sex trafficking and money laundering. This last week we have had success in taking down the banking systems of the Bank of International Settlements, the World Bank, the Vatican, Morocco, Macedonia as well as the Central Bank of Venezuela. We stand with the people of Venezuela as they protest their corrupt government and all should expect our support through operations during these uprisings wherever they may arise. We are the people, We stand with the people, We support the people.
After announcing OpIcarus at the beginning of May, Anonymous released a list of institutions the collective plans to target, which is divided into four sections: websites associated with the U.S. Federal Reserve, the International Monetary Fund (IMF), sites owned by the World Bank, and over 150 sites associated with national banks around the globe.
In just a few weeks, OpIcarus hackers have hit dozens of financial institutions listed in their online manifesto. Any questions about whether the hacktivists would be able to take out some of the more high-profile institutions seem to have been answered with the recent successful attacks on the World Bank, the U.S. Federal Reserve Banks, the Bank of France and the Bank of England (the central banks of France and the U.K.), and the Vatican.
While some have questioned the effectiveness of OpIcarus, senior director at Corero Network Security, Stephanie Weagle, told Info Security magazine:
While the impact on the individual targets of the DDoS attack campaign, ‘OpIcarus’ is unclear; obstructing or eliminating the availability of email servers is significant. In an online world any type of service outage is barely tolerated, especially in the banking industry where transactions and communications are often time-sensitive, and account security is of utmost importance.
In the world of high finance, time is money and every minute that a bank is forced offline it is losing potential revenue, which in turn hurts the bottom line of those that support the imperial war machine. Thus far, all targeted banks have refused to comment on the damage inflicted by the continuous cyber attacks.
Make no mistake that this operation has already been extremely effective — evolving and growing rapidly. The fact that global corporate media is refusing to report on these numerous high-profile attacks is indicative of the fear the 1% have of OpIcarus garnering massive public support. The attempt to conceal the scope and breadth of this operation from public view reveals the visceral fear the elite harbor toward those they prey upon. It seems the only thing the ruling class can do now is attempt to conceal and suppress the information about what’s transpiring in hopes of keeping the populace ignorant.
Jay Syrmopoulos is a geopolitical analyst, free thinker, researcher, and ardent opponent of authoritarianism. He is currently a graduate student at the University of Denver pursuing a master’s in Global Affairs. Jay’s work has been published on TheFreeThoughtProject.com, where this article first appeared, Ben Swann’s Truth in Media, Truth-Out, Raw Story, MintPress News, as well as many other sites. You can follow him on Twitter @sirmetropolis, on Facebook at Sir Metropolis and now on tsu.
The Pentagon is building a ‘self-aware’ killer robot army fueled by social media
Official US defence and NATO documents confirm that autonomous weapon systems will kill targets, including civilians, based on tweets, blogs and Instagram
by Nafeez Ahmed
This exclusive is published by INSURGE INTELLIGENCE, a crowd-funded investigative journalism project for the global commons
An unclassified 2016 Department of Defense (DoD) document, the Human Systems Roadmap Review, reveals that the US military plans to create artificially intelligent (AI) autonomous weapon systems, which will use predictive social media analytics to make decisions on lethal force with minimal human involvement.
Despite official insistence that humans will retain a “meaningful” degree of control over autonomous weapon systems, this and other Pentagon documents dated from 2015 to 2016 confirm that US military planners are already developing technologies designed to enable swarms of “self-aware” interconnected robots to design and execute kill operations against robot-selected targets.
More alarmingly, the documents show that the DoD believes that within just fifteen years, it will be feasible for mission planning, target selection and the deployment of lethal force to be delegated entirely to autonomous weapon systems in air, land and sea. The Pentagon expects AI threat assessments for these autonomous operations to be derived from massive data sets including blogs, websites, and multimedia posts on social media platforms like Twitter, Facebook and Instagram.
The raft of Pentagon documentation flatly contradicts Deputy Defense Secretary Robert Work’s denial that the DoD is planning to develop killer robots.
In a widely reported March conversation with Washington Post columnist David Ignatius, Work said that this may change as rival powers work to create such technologies:
“We might be going up against a competitor that is more willing to delegate authority to machines than we are, and as that competition unfolds we will have to make decisions on how we best can compete.”
But, he insisted, “We will not delegate lethal authority to a machine to make a decision,” except for “cyber or electronic warfare.”
Official US defence and NATO documents dissected by INSURGE intelligence reveal that Western governments are already planning to develop autonomous weapons systems with the capacity to make decisions on lethal force — and that such systems, in the future, are even expected to make decisions on acceptable levels of “collateral damage.”
Behind public talks, a secret arms race
Efforts to create autonomous robot killers have evolved over the last decade, but have come to a head this year.
A National Defense Industry Association (NDIA) conference on Ground Robotics Capabilities in March hosted government officials and industry leaders who confirmed that the Pentagon was developing robot teams that would be able to use lethal force without direction from human operators.
In April, government representatives and international NGOs convened at the United Nations in Geneva to discuss the legal and ethical issues surrounding lethal autonomous weapon systems (LAWS).
That month, the UK government launched a parliamentary inquiry into robotics and AI. And earlier in May, the White House Office of Science and Technology announced a series of public workshops on the wide-ranging social and economic implications of AI.
Most media outlets have reported that, so far, governments have not ruled out the long-term possibility that intelligent robots could eventually be authorized to make decisions to kill human targets autonomously.
But contrary to Robert Work’s claim, active research and development efforts to explore this possibility are already underway. The plans can be gleaned from several unclassified Pentagon documents in the public record that have gone unnoticed, until now.
Among them is a document released in February 2016 from the Pentagon’s Human Systems Community of Interest (HSCOI).
The document shows not only that the Pentagon is actively creating lethal autonomous weapon systems, but that a crucial component of the decision-making process for such robotic systems will include complex Big Data models, one of whose inputs will be public social media posts.
Robots that kill ‘like people’
The HSCOI is a little-known multi-agency research and development network seeded by the Office of the Secretary of Defense (OSD), which acts as a central hub for a wide array of science and technology work across US military and intelligence agencies.
The document is a 53-page presentation prepared by HSCOI chair, Dr. John Tangney, who is Director of the Office of Naval Research’s Human and Bioengineered Systems Division. Titled Human Systems Roadmap Review, the slides were presented at the NDIA’s Human Systems Conference in February.
The document says that one of the five “building blocks” of the Human Systems program is to “Network-enable, autonomous weapons hardened to operate in a future Cyber/EW [electronic warfare] Environment.” This would allow for “cooperative weapon concepts in communications-denied environments.”
But then the document goes further, identifying a “focus area” for science and technology development as “Autonomous Weapons: Systems that can take action, when needed”, along with “Architectures for Autonomous Agents and Synthetic Teammates.”
The final objective is the establishment of “autonomous control of multiple unmanned systems for military operations.”
Such autonomous systems must be capable of selecting and engaging targets by themselves — with human “control” drastically minimized to affirming that the operation remains within the parameters of the Commander’s “intent.”
The document explicitly asserts that these new autonomous weapon systems should be able to respond to threats without human involvement, but in a way that simulates human behavior and cognition.
The DoD’s HSCOI program must “bridge the gap between high fidelity simulations of human cognition in laboratory tasks and complex, dynamic environments.”
Referring to the “Mechanisms of Cognitive Processing” of autonomous systems, the document highlights the need for:
“More robust, valid, and integrated mechanisms that enable constructive agents that truly think and act like people.”
The Pentagon’s ultimate goal is to develop “Autonomous control of multiple weapon systems with fewer personnel” as a “force multiplier.”
The new systems must display “highly reliable autonomous cooperative behavior” to allow “agile and robust mission effectiveness across a wide range of situations, and with the many ambiguities associated with the ‘fog of war.’”
Resurrecting the human terrain
The HSCOI consists of senior officials from the US Army, Navy, Marine Corps, Air Force, Defense Advanced Research Projects Agency (DARPA); and is overseen by the Assistant Secretary of Defense for Research & Engineering and the Assistant Secretary of Defense for Health Affairs.
HSCOI’s work goes well beyond simply creating autonomous weapons systems. An integral part of this is simultaneously advancing human-machine interfaces and predictive analytics.
The latter includes what a HSCOI brochure for the technology industry, ‘Challenges, Opportunities and Future Efforts’, describes as creating “models for socially-based threat prediction” as part of “human activity ISR.”
This is shorthand for intelligence, surveillance and reconnaissance of a population in an ‘area of interest’, by collecting and analyzing data on the behaviors, culture, social structure, networks, relationships, motivation, intent, vulnerabilities, and capabilities of a human group.
The idea, according to the brochure, is to bring together open source data from a wide spectrum, including social media sources, in a single analytical interface that can “display knowledge of beliefs, attitudes and norms that motivate in uncertain environments; use that knowledge to construct courses of action to achieve Commander’s intent and minimize unintended consequences; [and] construct models to allow accurate forecasts of predicted events.”
The Human Systems Roadmap Review document from February 2016 shows that this area of development is a legacy of the Pentagon’s controversial “human terrain” program.
The Human Terrain System (HTS) was a US Army Training and Doctrine Command (TRADOC) program established in 2006, which embedded social scientists in the field to augment counterinsurgency operations in theaters like Iraq and Afghanistan.
The idea was to use social scientists and cultural anthropologists to provide the US military actionable insight into local populations to facilitate operations — in other words, to weaponize social science.
The $725 million program was shut down in September 2014 in the wake of growing controversy over its sheer incompetence.
The HSCOI program that replaces it includes social sciences but the greater emphasis is now on combining them with predictive computational models based on Big Data. The brochure puts the projected budget for the new human systems project at $450 million.
The Pentagon’s Human Systems Roadmap Review demonstrates that far from being eliminated, the HTS paradigm has been upgraded as part of a wider multi-agency program that involves integrating Big Data analytics with human-machine interfaces, and ultimately autonomous weapon systems.
The new science of social media crystal ball gazing
The 2016 human systems roadmap explains that the Pentagon’s “vision” is to use “effective engagement with the dynamic human terrain to make better courses of action and predict human responses to our actions” based on “predictive analytics for multi-source data.”
In a slide entitled, ‘Exploiting Social Data, Dominating Human Terrain, Effective Engagement,’ the document provides further detail on the Pentagon’s goals:
“Effectively evaluate/engage social influence groups in the op-environment to understand and exploit support, threats, and vulnerabilities throughout the conflict space. Master the new information environment with capability to exploit new data sources rapidly.”
The Pentagon wants to draw on massive repositories of open source data that can support “predictive, autonomous analytics to forecast and mitigate human threats and events.”
This means not just developing “behavioral models that reveal sociocultural uncertainty and mission risk”, but creating “forecast models for novel threats and critical events with 48–72 hour timeframes”, and even establishing technology that will use such data to “provide real-time situation awareness.”
According to the document, “full spectrum social media analysis” is to play a huge role in this modeling, to support “I/W [irregular warfare], information operations, and strategic communications.”
This is broken down further into three core areas:
“Media predictive analytics; Content-based text and video retrieval; Social media exploitation for intel.”
The document refers to the use of social media data to forecast future threats and, on this basis, automatically develop recommendations for a “course of action” (CoA).
Under the title ‘Weak Signal Analysis & Social Network Analysis for Threat Forecasting’, the Pentagon highlights the need to:
“Develop real-time understanding of uncertain context with low-cost tools that are easy to train, reduce analyst workload, and inform COA [course of action] selection/analysis.”
In other words, the human input into the development of course of action “selection/analysis” must be increasingly reduced, and replaced with automated predictive analytical models that draw extensively on social media data.
This can even be used to inform soldiers of real-time threats using augmented reality during operations. The document refers to “Social Media Fusion to alert tactical edge Soldiers” and “Person of Interest recognition and associated relations.”
The idea is to identify potential targets — ‘persons of interest’ — and their networks, in real-time, using social media data as ‘intelligence.’
Meaningful human control without humans
Both the US and British governments are therefore rapidly attempting to redefine “human control” and “human intent” in the context of autonomous systems.
Among the problems that emerged at the UN meetings in April is the tendency to dilute the criteria under which an autonomous weapon system can be described as subject to “meaningful” human control.
A separate Pentagon document dated March 2016 — a set of presentation slides for that month’s IEEE Conference on Cognitive Methods in Situation Awareness & Decision Support — insists that DoD policy is to ensure that autonomous systems ultimately operate under human supervision:
“[The] main benefits of autonomous capabilities are to extend and complement human performance, not necessarily provide a direct replacement of humans.”
Unfortunately, there is a ‘but’.
The March document, Autonomous Horizons: System Autonomy in the Air Force, was authored by Dr. Greg Zacharias, Chief Scientist of the US Air Force. The IEEE conference where it was presented was sponsored by two leading government defense contractors, Lockheed Martin and United Technologies Corporation, among other patrons.
Further passages of the document are revealing:
“Autonomous decisions can lead to high-regret actions, especially in uncertain environments.”
In particular, the document observes:
“Some DoD activity, such as force application, will occur in complex, unpredictable, and contested environments. Risk is high.”
The solution, supposedly, is to design machines that basically think, learn and problem solve like humans. An autonomous AI system should “be congruent with the way humans parse the problem” and driven by “aiding/automation knowledge management processes along lines of the way humans solve problem [sic].”
A section titled ‘AFRL [Air Force Research Laboratory] Roadmap for Autonomy’ thus demonstrates how by 2020, the US Air Force envisages “Machine-Assisted Ops compressing the kill chain.” The bottom of the slide reads:
“Decisions at the Speed of Computing.”
This two-stage “kill chain” is broken down as follows: firstly, “Defensive system mgr [manager] IDs threats & recommends actions”; secondly, “Intelligence analytic system fuses INT [intelligence] data & cues analyst of threats.”
In this structure, a lethal autonomous weapon system draws on intelligence data to identify a threat, which an analyst simply “IDs”, before recommending “action.”
The analyst’s role here is simply to authorize the kill; in reality, the essential element of human control — assessing the integrity of the kill decision — has been relegated to the end of an entirely automated analytical process, as a mere perfunctory obligation.
By 2030, the document sees human involvement in this process as being reduced even further to an absolute minimum. While a human operator may be kept “in the loop” (in the document’s words) the Pentagon looks forward to a fully autonomous system consisting of:
“Optimized platform operations delivering integrated ISR [intelligence, surveillance and reconnaissance] and weapon effects.”
The goal, in other words, is a single integrated lethal autonomous weapon system combining full spectrum analysis of all data sources with “weapon effects” — that is, target selection and execution.
The document goes to pains to layer this vision with a sense of human oversight being ever-present.
AI “system self-awareness”
Yet an even more blunt assertion of the Pentagon’s objective is laid out in a third document, a set of slides titled DoD Autonomy Roadmap presented exactly a year earlier at the NDIA’s Defense Tech Expo.
The document, authored by Dr. Jon Bornstein, who leads the DoD’s Autonomy Community of Interest (ACOI), begins by framing its contents with the caveat: “Neither Warfighter nor machine is truly autonomous.”
Yet it goes on to call for machine agents to develop:
“Perception, reasoning, and intelligence allow[ing] for entities to have existence, intent, relationships, and understanding in the battle space relative to a mission.”
This will be the foundation for two types of weapon systems: “Human/ Autonomous System Interaction and Collaboration (HASIC)” and “Scalable Teaming of Autonomous Systems (STAS).”
In the near term, machine agents will be able “to evolve behaviors over time based on a complex and ever-changing knowledge base of the battle space… in the context of mission, background knowledge, intent, and sensor information.”
However, it is the Pentagon’s “far term” vision for machine agents as “self-aware” systems that is particularly disturbing:
•Ontologies adjusted through common-sense knowledge via intuition.
•Learning approaches based on self-exploration and social interactions.
•Behavioral stability through self-modification.
It is in this context of the “self-awareness” of an autonomous weapon system that the document clarifies the need for the system to autonomously develop forward decisions for action, namely:
“Autonomous systems that appropriately use internal model-based/deliberative planning approaches and sensing/perception driven actions/control.”
The Pentagon specifically hopes to create what it calls “trusted autonomous systems”, that is, machine agents whose behavior and reasoning can be fully understood, and therefore “trusted” by humans:
“Collaboration means there must be an understanding of and confidence in behaviors and decision making across a range of conditions. Agent transparency enables the human to understand what the agent is doing and why.”
Once again, this is to facilitate a process by which humans are increasingly removed from the nitty gritty of operations.
In the “Mid Term”, there will be “Improved methods for sharing of authority” between humans and machines. In the “Far Term”, this will have evolved to a machine system functioning autonomously on the basis of “Awareness of ‘commanders intent’” and the “use of indirect feedback mechanisms.”
This will finally create the capacity to deploy “Scalable Teaming of Autonomous Systems (STAS)”, free of overt human direction, in which multiple machine agents display “shared perception, intent and execution.”
Teams of autonomous weapon systems will display “Robust self-organization, adaptation, and collaboration”; “Dynamic adaption, ability to self-organize and dynamically restructure”; and “Agent-to-agent collaboration.”
Notice the lack of human collaboration.
The “far term” vision for such “self-aware” autonomous weapon systems is not, as Robert Work claimed, limited to cyber or electronic warfare, but extends well beyond them.
These operations might even take place in tight urban environments — “in close proximity to other manned & unmanned systems including crowded military & civilian areas.”
The document admits, though, that the Pentagon’s major challenge is to mitigate unpredictable environments and emergent behavior.
Autonomous systems are “difficult to assure correct behavior in a countless number of environmental conditions” and are “difficult to sufficiently capture and understand all intended and unintended consequences.”
Terminator teams, led by humans
The Autonomy roadmap document clearly confirms that the Pentagon’s final objective is to delegate the bulk of military operations to autonomous machines, capable of inflicting “Collective Defeat of Hard and Deeply Buried Targets.”
One type of machine agent is the “Autonomous Squad Member (Army)”, which “Integrates machine semantic understanding, reasoning, and perception into a ground robotic system”, and displays:
“Early implementation of a goal reasoning model, Goal-Directed Autonomy (GDA) to provide the robot the ability to self-select new goals when it encounters an unanticipated situation.”
Human team members in the squad must be able “to understand an intelligent agent’s intent, performance, future plans and reasoning processes.”
Another type is described under the header, ‘Autonomy for Air Combat Missions Team (AF).’
Such an autonomous air team, the document envisages, “Develops goal-directed reasoning, machine learning and operator interaction techniques to enable management of multiple, team UAVs.” This will achieve:
“Autonomous decision and team learning enable the TBM [Tactical Battle Manager] to maximize team effectiveness and survivability.”
TBM refers to battle management autonomy software for unmanned aircraft.
The Pentagon still, of course, wants to ensure that there remains a human manual override, which the document describes as enabling a human supervisor “to ‘call a play’ or manually control the system.”
Targeting evil antiwar bloggers
Yet the biggest challenge, nowhere acknowledged in any of the documents, is ensuring that automated AI target selection actually selects real threats, rather than generating or pursuing false positives.
According to the Human Systems roadmap document, the Pentagon has already demonstrated extensive AI analytical capabilities in real-time social media analysis, through a NATO live exercise last year.
During the exercise, Trident Juncture — NATO’s largest exercise in a decade — US military personnel “curated over 2M [million] relevant tweets, including information attacks (trolling) and other conflicts in the information space, including 6 months of baseline analysis.” They also “curated and analyzed over 20K [i.e. 20,000] tweets and 700 Instagrams during the exercise.”
The Pentagon document thus emphasizes that the US Army and Navy can now already “provide real-time situation awareness and automated analytics of social media sources with low manning, at affordable cost”, so that military leaders can “rapidly see whole patterns of data flow and critical pieces of data” and therefore “discern actionable information readily.”
The primary contributor to the Trident Juncture social media analysis for NATO, which occurred over two weeks from late October to early November 2015, was a team led by information scientist Professor Nitin Agarwal of the University of Arkansas, Little Rock.
Agarwal’s project was funded by the US Office of Naval Research, Air Force Research Laboratory and Army Research Office, and conducted in collaboration with NATO’s Allied Joint Force Command and NATO Strategic Communications Center of Excellence.
Slides from a conference presentation about the research show that the NATO-backed project attempted to identify a hostile blog network during the exercise containing “anti-NATO and anti-US propaganda.”
Among the top seven blogs identified as key nodes for anti-NATO internet traffic were websites run by Andreas Speck, an antiwar activist; War Resisters International (WRI); and Egyptian democracy campaigner Maikel Nabil Sanad — along with some Spanish language anti-militarism sites.
Andreas Speck is a former staffer at WRI, which is an international network of pacifist NGOs with offices and members in the UK, Western Europe and the US. One of its funders is the Joseph Rowntree Charitable Trust.
The WRI is fundamentally committed to nonviolence, and campaigns against war and militarism in all forms.
Most of the blogs identified by Agarwal’s NATO project are affiliated with the WRI, including, for instance, nomilservice.com, WRI’s Egyptian affiliate founded by Maikel Nabil, which campaigns against compulsory military service in Egypt. Nabil was nominated for the Nobel Peace Prize and even supported by the White House for his conscientious objection to Egyptian military atrocities.
The NATO project urges:
“These 7 blogs need to be further monitored.”
The project was touted by Agarwal as a great success: it managed to extract 635 identity markers through metadata from the blog network, including 65 email addresses, 3 “persons”, and 67 phone numbers.
Agarwal’s conference slides list three Pentagon-funded tools that his team created for this sort of social media analysis: Blogtracker, Scraawl, and Focal Structures Analysis.
Flagging up an Egyptian democracy activist like Maikel Nabil as a hostile entity promoting anti-NATO and anti-US propaganda demonstrates that when such automated AI tools are applied to war theatres in complex environments (think Pakistan, Afghanistan and Yemen), the potential to identify individuals or groups critical of US policy as terrorism threats is all too real.
This case demonstrates how deeply flawed the Pentagon’s automation ambitions really are. Even with the final input of independent human expert analysts, entirely peaceful pro-democracy campaigners who oppose war are relegated by NATO to the status of potential national security threats requiring further surveillance.
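The scale of the false-positive problem follows from simple base-rate arithmetic. The sketch below is purely illustrative: the threat prevalence, hit rate and false-alarm rate are hypothetical numbers of mine, appearing nowhere in the Pentagon documents. It applies Bayes’ rule to show why, when genuine threats are rare, even a highly accurate classifier buries them under false flags.

```python
# Illustrative only: all rates below are hypothetical assumptions,
# not figures drawn from the Pentagon or NATO documents.
def flagged_is_threat(base_rate, hit_rate, false_alarm_rate):
    """P(actual threat | system flags the account), via Bayes' rule."""
    p_flagged = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    return (base_rate * hit_rate) / p_flagged

# Suppose 1 in 10,000 monitored accounts is a genuine threat, and the
# classifier catches 99% of them while misfiring on just 1% of the rest.
p = flagged_is_threat(base_rate=1e-4, hit_rate=0.99, false_alarm_rate=0.01)
print(f"{p:.2%}")  # ~0.98%: over 99 of every 100 flagged accounts are innocent
```

Under these assumptions, near-perfect accuracy still means the overwhelming majority of flagged accounts, like the antiwar bloggers above, pose no threat at all.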
Compressing the kill chain
It’s often assumed that DoD Directive 3000.09, ‘Autonomy in Weapon Systems’, issued in 2012, limits kill decisions to human operators under the following stipulation in clause 4:
“Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
After several paragraphs underscoring the necessity of target selection and execution being undertaken under the oversight of a human operator, the Directive goes on to open up the possibility of developing autonomous weapon systems without any human oversight, albeit with the specific approval of senior Pentagon officials:
“Autonomous weapon systems may be used to apply non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets… Autonomous or semi-autonomous weapon systems intended to be used in a manner that falls outside the policies in subparagraphs 4.c.(1) through 4.c.(3) must be approved by the Under Secretary of Defense for Policy (USD(P)); the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)); and the CJCS before formal development and again before fielding.”
Rather than prohibiting the development of lethal autonomous weapon systems, the directive simply consolidates all such developments under the explicit authorization of the Pentagon’s most senior policy, acquisition and military chiefs.
Worse, the directive expires on 21st November 2022 — which is around the time such technology is expected to become operational.
Indeed, later that year, Lieutenant Colonel Jeffrey S. Thurnher, a US Army lawyer at the US Naval War College’s International Law Department, published a position paper in the National Defense University publication, Joint Force Quarterly.
He argued that there are no substantive legal or ethical obstacles to developing fully autonomous killer robots, as long as such systems are designed to maintain a semblance of human oversight through “appropriate control measures.”
In the conclusions to his paper, titled No One At The Controls: Legal Implications of Fully Autonomous Targeting, Thurnher wrote:
“LARs [lethal autonomous robots] have the unique potential to operate at a tempo faster than humans can possibly achieve and to lethally strike even when communications links have been severed. Autonomous targeting technology will likely proliferate to nations and groups around the world. To prevent being surpassed by rivals, the United States should fully commit itself to harnessing the potential of fully autonomous targeting. The feared legal concerns do not appear to be an impediment to the development or deployment of LARs. Thus, operational commanders should take the lead in making this emerging technology a true force multiplier for the joint force.”
The NATO document, which aims to provide expert legal advice to government policymakers, sets out a position in which the deployment of autonomous weapon systems for lethal combat — in particular the delegation of targeting and kill decisions to machine agents — is viewed as being perfectly legitimate in principle.
It is the responsibility of specific states, the document concludes, to ensure that autonomous systems operate in compliance with international law in practice — a caveat that also applies for the use of autonomous systems for law-enforcement and self-defence.
In the future, though, the NATO document points to the development of autonomous systems that can “reliably determine when foreseen but unintentional harm to civilians is ethically permissible.”
Acknowledging that currently only humans are able to make a “judgement about the ethical permissibility of foreseen but unintentional harm to civilians (collateral damage)”, the NATO policy document urges states developing autonomous weapon systems to ensure that eventually they “are able to integrate with collateral damage estimation methodologies” so as to delegate targeting and kill decisions accordingly.
The NATO position is particularly extraordinary given that international law — such as the Geneva Conventions — defines foreseen deaths of civilians caused by a military action as intentional, precisely because they were foreseen yet actioned anyway.
Among the acts prohibited are:
“… making the civilian population or individual civilians, not taking a direct part in hostilities, the object of attack; launching an attack in the knowledge that such attack will cause incidental loss of civilian life, injury to civilians or damage to civilian objects which would be clearly excessive in relation to the concrete and direct military advantage anticipated;… making civilian objects, that is, objects that are not military objectives, the object of attack.”
“… launching an indiscriminate attack resulting in loss of life or injury to civilians or damage to civilian objects; launching an attack against works or installations containing dangerous forces in the knowledge that such attack will cause excessive incidental loss of civilian life, injury to civilians or damage to civilian objects.”
In other words, NATO’s official policy guidance on autonomous weapon systems sanitizes the potential for automated war crimes. The document actually encourages states to eventually develop autonomous weapons capable of inflicting “foreseen but unintentional” harm to civilians in the name of securing a ‘legitimate’ military advantage.
Yet the NATO document does not stop there. It even goes so far as to argue that policymakers considering the development of autonomous weapon systems for lethal combat should reflect on the possibility that delegating target and kill decisions to machine agents would minimize civilian casualties.
A new report by Paul Scharre, who led the Pentagon working group that drafted DoD Directive 3000.09 and now heads up the future warfare program at the Center for a New American Security in Washington DC, does not mince words about the potentially “catastrophic” risks of relying on autonomous weapon systems.
“With an autonomous weapon,” he writes, “the damage potential before a human controller is able to intervene could be far greater…
“In the most extreme case, an autonomous weapon could continue engaging inappropriate targets until it exhausts its magazine, potentially over a wide area. If the failure mode is replicated in other autonomous weapons of the same type, a military could face the disturbing prospect of large numbers of autonomous weapons failing simultaneously, with potentially catastrophic consequences.”
Scharre points out that “autonomous weapons pose a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces,” due to any number of potential reasons, including “hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors.”
Noting that in the software industry, for every 1,000 lines of code, there are between 15 and 50 errors, Scharre points out that such marginal, routine errors could easily accumulate to create unexpected results that could be missed even by the most stringent testing and validation methods.
The more complex the system, the more difficult it will be to verify and track the system’s behavior under all possible conditions: “… the number of potential interactions within the system and with its environment is simply too large.”
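Scharre’s error-rate figures can be put in perspective with back-of-the-envelope arithmetic. The sketch below uses his industry range of 15-50 errors per 1,000 lines of code; the system size and the fraction of errors caught in testing are hypothetical assumptions of mine, chosen only for illustration.

```python
# Back-of-the-envelope sketch using Scharre's cited range (15-50 errors per
# 1,000 lines of code); system size and test catch rate are hypothetical.
def residual_errors(lines_of_code, errors_per_kloc, fraction_caught):
    """Expected errors that survive testing and validation."""
    latent = lines_of_code / 1000 * errors_per_kloc
    return latent * (1 - fraction_caught)

# A modest 500,000-line weapon control system, with testing good enough
# to catch 99.9% of all latent errors before fielding:
low = residual_errors(500_000, errors_per_kloc=15, fraction_caught=0.999)
high = residual_errors(500_000, errors_per_kloc=50, fraction_caught=0.999)
print(low, high)  # between ~7.5 and ~25 errors still lurking in the system
```

Even with near-perfect validation, under these assumptions a handful of undetected errors ship with the system, any one of which might only surface in an environment the testers never imagined.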
The documents discussed here show that the Pentagon is taking pains to develop ways to mitigate these risks.
But as Scharre concludes, “these risks cannot be eliminated entirely. Complex tightly coupled systems are inherently vulnerable to ‘normal accidents.’ The risk of accidents can be reduced, but never can be entirely eliminated.”
As the trajectory toward AI autonomy and complexity accelerates, so does the risk that autonomous weapon systems will, eventually, wreak havoc.
Dr Nafeez Ahmed is an investigative journalist, bestselling author and international security scholar. A former Guardian writer, he writes the ‘System Shift’ column for VICE’s Motherboard, and is a weekly columnist for Middle East Eye.
He is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his Guardian work, and was twice selected in the Evening Standard’s top 1,000 most globally influential Londoners, in 2014 and 2015.
Nafeez has also written and reported for The Independent, Sydney Morning Herald, The Age, The Scotsman, Foreign Policy, The Atlantic, Quartz, Prospect, New Statesman, Le Monde diplomatique, New Internationalist, The Ecologist, Alternet, Counterpunch, Truthout, among others.
He is a Visiting Research Fellow at the Faculty of Science and Technology at Anglia Ruskin University, where he is researching the link between global systemic crises and civil unrest for Springer Energy Briefs.
This story is being released for free in the public interest, and was enabled by crowdfunding. I’d like to thank my amazing community of patrons for their support, which gave me the opportunity to work on this story. Please support independent, investigative journalism for the global commons via Patreon.com, where you can donate as much or as little as you like.
Many people may mistakenly believe that the future is something ushered in by others, like big companies or governments, and that they themselves play either a minor active role or an entirely passive one. In reality, groups of regular people just like you or me around the world are already, quite literally, building the future of their communities with their own two hands, in collaboration with their friends, family and neighbors and, through the power of the Internet, with like-minded individuals around the world.
Above image: Instead of some planned community built by government or developers, we can add a layer of opensource technology over our existing communities, on our rooftops, in our offices, and at existing public spaces or markets. In addition to this added layer of physical technology, a little change in our mindset will go a long way in transforming our communities.
Because of the exponential progress of technology, the impact of small, organized projects is increasing as well. Think about 3D printing and how for many years it remained firmly in the realm of large businesses for use in prototyping. It was only when small groups of enthusiastic hobbyists around the world began working on cheaper and more accessible versions of these machines that they ended up on the desktops of regular people around the world, changing the way we look at manufacturing.
Similar advances in energy production, biotechnology, agriculture, IT, and manufacturing technology are likewise empowering people on a very distributed and local level.
What we see emerging is a collection of local “institutions” giving people direct access to the means to change their communities for the better, bypassing more abstract and less efficient means of effecting change like voting or protesting.
Political processes, however, will become more relevant and practical when people actually have resources and direct hands-on experience in the matters of running their communities. Demanding more of those that represent you will have more meaning when those demands are coupled with practical solutions and enumerated plans of action.
3D printing has come a long way since the first RepRap desktop printers and their derivatives, which include MakerBot’s first designs. It has gone from an obscure obsession among hobbyists to a mainstream phenomenon that is transforming the way we look at manufacturing.
Let’s explore these “institutions” and see what is possible, what is already being done, and how you can get involved today in physically shaping your community’s future starting today.
Makerspaces
A makerspace is exactly what it sounds like: a space where you make things. However, it is often associated with computer controlled personal manufacturing technology like 3D printers, CNC mills, and laser and/or waterjet cutters. There is also a significant amount of electronic prototyping equipment on hand, including opensource development boards like the Arduino, which allows virtually anyone to control physical objects in the real world.
A well-equipped makerspace in Singapore.
Makerspaces also generally include a small core team with skills ranging from design and engineering to software development. These teams usually are eager to bring in new people and introduce them to the tools, techniques, and technology they are so passionate about.
Makerspaces already exist around the world and it is very likely that no matter where you live, you have one relatively nearby. Makerspaces hold workshops for both absolute beginners and experienced tech enthusiasts.
Makerspaces hold frequent workshops to share their knowledge and enthusiasm with others, often absolute beginners. There is a good chance your local makerspace has workshops available. Some are even free.
You can prototype virtually anything in a makerspace, making it the perfect place to go when you have a problem and want to develop a practical, tangible solution to solve it. Everything from an opensource solar charger to a new kind of 3D printer could be (and has been) made at a makerspace, making it the perfect nexus for our local community and the variety of other local institutions that may crop up there.
Local Agriculture
A combination of rediscovered traditional practices and modern technology makes local food production both practical and profitable. Community gardens are not uncommon, and there is growing interest around the world, particularly in urban areas, in using sun-soaked rooftops to grow food for consumption or for distribution to local restaurants and markets.
US-based Growing Power shows what communities can accomplish by working together. They have proven that community urban agriculture can be both practical and profitable, with their project becoming not just a local business, but a resource for the community as well.
The Comcrop project in Singapore provides a particularly impressive example. In operation for several years now, it serves not only as a source of locally produced food for restaurants and grocery stores, but also as a community resource, teaching all who are interested how to raise crops in a dense urban environment like Singapore’s.
Singapore’s Comcrop project has proven that even in the densest of urban environments, agriculture can be carried out by communities for profit, fun, and education. Collaboration with local makerspaces could further enhance their operation’s efficiency.
Another impressive example of local agriculture is US-based Growing Power where greenhouses, vermiculture, and aquaponics are all combined to generate an immense amount of food feeding into a local distribution network the project has diligently developed over the years.
Local food production and distribution is steadily expanding around the world as the concept of farmers’ markets spreads and entire communities of both producers and consumers connect in a much more relevant, transparent, and beneficial manner than is possible under the existing mass-consumerist paradigm of big-ag and big-box stores.
Applying the resources found at a makerspace to local agriculture gives us the ability to take organic agriculture and increase its efficiency through automation. That’s the idea behind ProgressTH’s own automated agriculture project, and others like it. There is no reason why local communities cannot have locally produced organic food, and utilize technology to bring efficiency on par with that claimed by large-scale operations.
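The details of ProgressTH’s project aren’t spelled out here, but the kind of automation described can be sketched minimally as a moisture-threshold controller with hysteresis, the sort of logic an Arduino-class board in a makerspace-built system might run. The function name, thresholds and sensor values below are hypothetical, not taken from the actual project.

```python
# A minimal sketch of threshold-based irrigation automation of the kind
# described above; names, thresholds, and readings are hypothetical.
def should_water(moisture_pct, low=30.0, high=55.0, pump_on=False):
    """Decide pump state from a soil-moisture reading (percent).

    The gap between `low` and `high` provides hysteresis, so the pump
    doesn't rapidly toggle when the reading hovers near one threshold.
    """
    if moisture_pct < low:
        return True        # soil too dry: start (or keep) watering
    if moisture_pct > high:
        return False       # soil wet enough: stop watering
    return pump_on         # in between: keep the current pump state

print(should_water(25))                 # True  (dry soil)
print(should_water(60, pump_on=True))   # False (wet enough, stop)
print(should_water(40, pump_on=True))   # True  (hysteresis keeps pumping)
```

In a real system this decision loop would be fed by a soil sensor and drive a relay, with the thresholds tuned per crop; the control logic itself stays this simple.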
Local Power
Modern civilization does not function without electrical power, something we are reminded of every time the power goes out during a storm. Currently, most of the world’s power still comes from centralized national grids and large power plants.
Dropping prices and increasing capabilities are making solar power an attractive means of helping decentralize and localize power production.
However, the march of technology is finally making the means of producing power locally more accessible to people around the world. An extreme example of a localized, distributed power grid can be found in the remote hills of Thailand’s Phetchaburi province, where the national power grid never quite made it. A local team created a tech-center of sorts where villagers were trained in the design and installation of solar power systems, bringing the village light, and power for irrigation, house by house. The villagers have created a collaborative network in which everyone helps out when expanding the network’s capabilities.
The Pedang Project in Phetchaburi, Thailand has literally brought power to a tiny remote village isolated from the national power grid. Now it is taking its experience and sharing it with others around the country to replicate their success.
This network also trains people from all over the country to replicate their success elsewhere, even in areas where the national grid does reach, but where independence in power production is still sought.
This includes a school halfway across the country that is entirely solar-powered and has incorporated alternative energy into its curriculum, giving students practical experience and skills to use once they graduate.
A school in Thailand’s northeast has also become a center for alternative energy and organic agriculture, all of which is combined with more traditional curriculum. Students grow their own food and help maintain the solar power system that powers the school during studying hours.
Imagine every community, rural or urban, developing its own alternative power solutions, managing both the physical infrastructure and the knowledge required to maintain it. This doesn’t necessarily need to replace current power production, but it could augment it until technology makes complete, localized, distributed power production possible.
Local Healthcare and DIYbio
This healthcare professional is working on a prototype in a makerspace located literally within the hospital where he works.
MIT’s MakerNurse program is one example of makerspaces embedded in hospitals. Bangkok-based QSNICH (Queen Sirikit National Institute of Child Health) is another. Decentralizing and opening up the development of biomedical technology is key to lowering its price. While subsidizing healthcare is necessary now to ensure that people who cannot afford treatment can still get it, in the future healthcare will be so cheap that such subsidies will have less impact on the quantity and quality of care.
Biomedical technology, the hardware you see in hospitals, is one thing; the actual pharmaceuticals and therapies administered to patients are another. DIYbio (do-it-yourself biology) is a growing community, much like the maker movement, that seeks to open up biotechnology to a wider audience by lowering the cost of equipment and opening up knowledge, making its work collaborative, transparent and, most importantly, opensource.
3D-printed prototypes developed for healthcare professionals at a Bangkok-based children’s hospital by ProgressTH’s in-house makerspace.
And, believe it or not, cutting-edge technology like gene therapy, which has already cured cancer in terminal leukemia patients and shown promise in clinical trials for everything from heart disease to blindness and deafness, is being approached by the DIYbio community. For now, this work sits somewhere between a community lab and a small start-up company, as is the case with Bioviva or Andrew Hessel’s Pink Army Cooperative. In the future, we may see the current collaborations between makerspaces and healthcare professionals extend to biotech researchers and local community labs.
Liz Parrish of Bioviva is blurring the lines between traditional R&D and accelerated and smaller-scale progress in developing therapies for patients.
Again, the makerspace allows for the prototyping and development of much of the opensource biotech equipment already being produced and making headlines around the world.
Microfactories
Microfactories are localized manufacturing facilities that specialize in small-run production. Say you create a brilliant prototype at your local makerspace, but need to make only 100-200 units at a time. Because of current economies of scale, traditional factories usually will not help you, at least not for a reasonable price. Microfactories can fill the void between makerspace prototypes and mass production.
Microfactories already exist, but require large capital investments for the amount of machinery required to efficiently carry out small-run production. Advances in personal manufacturing will continue to lower these barriers, and many makerspaces around the world are already working to bridge the gap between prototyping and small-run production.
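The economies-of-scale point can be made concrete with a simple break-even calculation. Every number below is hypothetical, chosen only to illustrate why a 100-200 unit run favors a microfactory while a run of tens of thousands favors a traditional factory.

```python
# Hypothetical unit-cost comparison: a mass-production factory carries a
# large tooling/setup cost but cheap per-unit work; a microfactory has
# negligible setup cost but pricier per-unit work. All figures invented.
def unit_cost(setup_cost, per_unit, quantity):
    """Total cost per unit: setup cost amortized over the run, plus labor/materials."""
    return setup_cost / quantity + per_unit

qty = 150  # a typical makerspace-scale production run
mass = unit_cost(setup_cost=20_000, per_unit=2.0, quantity=qty)
micro = unit_cost(setup_cost=0, per_unit=9.0, quantity=qty)
print(f"mass production: ${mass:.2f}/unit, microfactory: ${micro:.2f}/unit")
# At 150 units the microfactory wins by a wide margin; at tens of
# thousands of units, amortization flips the advantage to mass production.
```

As personal manufacturing tools drive the microfactory’s per-unit cost down, the crossover point moves ever higher, which is exactly the trend the following paragraphs describe.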
In the future, microfactories may evolve into an entire network of distributed manufacturing making mass production obsolete. This is, again, dependent on the progress of manufacturing technology. When computer-controlled manufacturing processes like CNC mills and 3D printers can handle more materials, faster, and more efficiently, small-run production will become more and more practical.
An Arduino-compatible board made in Thailand for the Thai market beats Chinese-made boards in both quality and even price. This is part of a trend toward the gradual reduction of manufacturing “hubs” and a shift toward a more distributed and local means of manufacturing.
This is just the leading edge of a shifting paradigm toward fully distributed manufacturing. Again, makerspaces will play a crucial role, providing educational and training resources for the local community to learn how to design and develop ideas into prototypes and then pass them on to local microfactories for production and distribution.
Local Motors is pioneering the concept of distributed car manufacturing. Microfactories in the future may make everything from handheld devices to something as big as a car, on demand or in small runs that will challenge or entirely shift our current globalized manufacturing paradigm.
Just how far could this go? Judging by US-based Local Motors, which is attempting (with much success) to create a distributed auto-manufacturing network, it could probably end up encompassing nearly everything we use on a daily basis, short of aerospace and architecture. And with 3D-printed buildings cropping up around the world, each community might have its own cooperatively owned system for those as well.
Maybe now you can see how communities possessing these key institutions could begin to tackle their problems head on, practically, with tangible solutions instead of waiting for others, far away, to address them for them. By doing so, people will become more directly involved in their own destiny, possessing both skills and experience in running and improving their communities, giving them better insight and discretion when engaging in political processes beyond their community.
And because of the talent that is attracted to and produced within makerspaces, the means of creating, for example, parallel mesh communication networks or water production and distribution systems, could exist as well. Virtually everything in one’s community could end up a product of local talent, entrepreneurial vision, and innovation.
But it is important to remind potential critics that this is not a process toward tens of thousands of isolated communities scattered across the planet. Like makerspaces today, while each one possesses its own tools and talent, they are all connected and collaborating together with other spaces around the world taking and adapting great ideas when needed, while sharing their own success with others through an opensource culture.
The distributed nature of these economic, manufacturing, healthcare, agricultural, and infrastructure networks also means more resilience, especially because they are collaborative on a much larger scale. There is no single power plant or agricultural region that can be “wiped out” to plunge a huge dependent population into crisis. Disasters and crises can be absorbed and compensated for by unaffected neighboring communities. The loss of power in one community will not affect another if both are self-sufficient in power production; one community could, however, lend another temporary assistance.
“Standards,” if you will, would still exist, honed not through legality and policy, but through actual performance data, user feedback, and reputation. And because this process by its very nature is a flexible one, unforeseen opportunities and threats could be capitalized on or met as needed.
How Can You Get Involved Today?
Yes, you can get involved today! All you have to do is find your closest makerspace (or here) and drop by to check it out. You can also begin teaching yourself by taking advantage of the huge amount of fully free resources online covering everything from the basics of 3D printing, to opensource electronics, to local organic agriculture, to DIYbio. Let your favorite Internet search engine be your guide and find the resources you find most useful to your own style of learning. On YouTube alone, by simply typing any area of interest in, you can usually find dozens of tutorials and presentations.
A makerspace in Chiang Mai, Thailand. Just a few years ago, there were no makerspaces at all in Thailand, now there are clubs and spaces from north to south and a growing community connected through collaboration and enthusiasm about the power of hands-on innovations and solutions.
Get your friends involved; and if none are interested, it is easy to make new friends who are interested in this shifting paradigm, since “collaboration” is in fact at the very heart of it. If you are in Bangkok, feel free to contact us for workshops that ProgressTH and its many friends have on offer, some of which are even free.
The most important thing to remember is, no matter how small your progress is day to day, it will all add up in a year’s time to something that will surely surprise you. The only sure way to fail is by doing nothing — after all, zero times all the days in the year still only equals zero. You do not need to be a trained engineer or professional designer, biologist, or experienced farmer to begin building up your local community. Many of the most prominent names contributing to this current paradigm are college dropouts, or entirely self-taught. You will surely run into professionals, however, and you will learn a lot from them.
It is a truly exciting journey, and one that will directly benefit both you and your community. You can do it part-time alongside your existing job, and many have ended up making a full-time living by contributing. We have covered, and will continue to cover, this unfolding movement, and we would love to cover your contributions… so start contributing!
What happens if the facial recognition cameras get it wrong? Or the “visual microphone” detects the wrong sound? Or the emotion-reading or crime-predicting technology of the near future is just quackery, designed to frame anyone the government wants to convict? Sadly, this isn’t sci-fi fantasy; it’s the present and we’re already living through it. Just ask Steve Talley…