
Amazon, Facebook, and Google could be broken up.

Giuseppe Sandro Mela.

2017-07-22.


From a legal standpoint, it remains genuinely difficult at present to make an antitrust case.

All firms above a certain size operate across numerous different states, each with its own body of law and its own market rules. Treaties of varying binding force exist, but they are signed by a very limited number of nations and, almost as a rule, are disregarded.

There is certainly a pressing tax problem. Multinationals can apportion their profits among the states most favorable to them, paying only some small residue in their official tax residence. No global legislation on the matter exists and, let it be said openly, nobody would want one.

Home states have every interest, even if unstated, in seeing their firms grow as large as possible, cost what it may, while small states grant advantageous tax regimes provided some job-creating activity is established on their territory.

The problem becomes far more acute, and shifts openly from the purely economic to the political, when these international companies acquire a worldwide monopoly position.

And such would seem to be the current position of Amazon, Facebook, and Google.

In the United States, where these companies have their legal headquarters, the Sherman Antitrust Act is in force: passed in 1890, it was famously enforced in 1911 against the Rockefeller empire and the American Tobacco Company.

«Alphabet Inc.’s Google gets about 77 percent of U.S. search advertising revenue»

*

«Google and Facebook Inc. together control about 56 percent of the mobile ad market»

*

«Amazon takes about 70 percent of all e-book sales and 30 percent of all U.S. e-commerce»

*

«higher returns on capital haven’t resulted in increases in business investment»

*

«Instead of applying conventional antitrust theory, such as the effect of a merger on consumer prices, enforcers may need to consider alternative tools. One is to equate antitrust with privacy, not a traditional concern of the competition police. Germany’s Federal Cartel Office, for example, is examining charges that Facebook bullies users into agreeing to terms and conditions that allow the company to gather data on their web-surfing activities in ways they might not understand»

* * * * * * * *

Failing to reinvest earned profits is a typical feature of monopoly situations: with no competition, improving the product would seem unnecessary.

In our view this is the most easily verifiable parameter: the best is often the enemy of the good, always within the bounds of common sense.

Nor is it easy to answer the question of how much market share a company must hold to be considered a monopolist.

Finally, as we see it, there are political and military aspects of no small weight.

Google and Facebook de facto govern the world's information flow, and Amazon is on its way to handling a large share of retail sales, even as it begins to supply certain wholesalers.

It should be unnecessary to point out how keenly the intelligence services are interested in these sectors.

It will be genuinely interesting to see how these issues are resolved.


Bloomberg. 2017-07-20. Should America’s Tech Giants Be Broken Up?

Apple, Amazon, Google, and Facebook may be contributing to the U.S. economy’s most persistent ailments.

*

As a former tour manager for Bob Dylan and The Band, Jonathan Taplin isn’t your typical academic. Lately, though, he’s been busy writing somber tomes about market shares, monopolies, and online platforms. His conclusion: Amazon.com, Facebook, and Google have become too big and too powerful and, if not stopped, may need to be broken up.

Crazy? Maybe not. Taplin, 70, author of Move Fast and Break Things: How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy, knows digital media, having run the Annenberg Innovation Lab at the University of Southern California. Ten years before YouTube, he founded one of the first video-on-demand streaming services. He also knows media M&A as a former Merrill Lynch investment banker in the 1980s. He says Google is as close to a monopoly as the Bell telephone system was in 1956.

He has a point, judging by market-research figures. Alphabet Inc.’s Google gets about 77 percent of U.S. search advertising revenue. Google and Facebook Inc. together control about 56 percent of the mobile ad market. Amazon takes about 70 percent of all e-book sales and 30 percent of all U.S. e-commerce. Taplin pegs Facebook’s share of mobile social media traffic, including the company’s WhatsApp, Messenger, and Instagram units, at 75 percent.

Economists have noticed these monopoly-size numbers and drawn even bigger conclusions: They see market concentration as the culprit behind some of the U.S. economy’s most persistent ailments—the decline of workers’ share of national income, the rise of inequality, the decrease in business startups, the dearth of job creation, and the fall in research and development spending.

Can Big Tech really be behind all that? Economists are starting to provide the evidence. David Autor, the MIT economics professor who famously showed the pernicious effects of free-trade deals on Midwestern communities, is one. A recent paper he co-wrote argues that prestigious technology brands, using the internet’s global reach, are able to push out rivals and become winner-take-all “superstar” companies. They’re highly profitable, and their lucky employees generally earn higher salaries to boot.

They don’t engage in the predatory behavior of yore, such as selling goods below the cost of production to steal market share and cripple competitors. After all, the services that Facebook and Google offer are free (if you don’t consider giving up your personal data and privacy rights to be a cost). However, academics have documented how these companies employ far fewer people than the largest companies of decades past while taking a disproportionate share of national profits. As they grow and occupy a bigger part of the economy, median wages stagnate and labor’s share of gross domestic product declines. Labor’s shrinking share of output is widely implicated in the broader economic growth slowdown.

Still others have shown that, as markets become more concentrated and established companies more powerful, the ability of startups to succeed declines. Since half of all new jobs spring from successful startups, this dampens job creation.

It’s no wonder the superstar companies are getting supernormal returns on capital, further adding to income inequality, writes Peter Orszag in Bloomberg View. He and Jason Furman, chairman of President Barack Obama’s Council of Economic Advisers, point out that higher returns on capital haven’t resulted in increases in business investment—yet another manifestation of monopoly power.

Some members of the Chicago School, the wellspring of modern antitrust theory, agree. In the 1970s and ’80s, a group of University of Chicago scholars upended antitrust law by arguing that the benefits of economic efficiency created by mergers outweighed any concerns over company size. The test was one of consumer welfare: Does a merger give the combined company the power to raise consumer prices, and are barriers to entry so high that new players can’t easily jump in? U.S. antitrust enforcers were swayed. From 1970 to 1999, the U.S. brought an average of 15.7 monopoly cases a year. That number has since fallen—to fewer than three a year from 2000 to 2014.

Luigi Zingales, director of the university’s Stigler Center, likes to remind people that the reason Google and Facebook were able to succeed is that the U.S. in 1998, under Bill Clinton, sued Microsoft Corp. for tying its web browser to its Windows operating system to undermine rival Netscape. A trial court decision that Microsoft should be broken up was overturned on appeal (though not the court’s finding of monopoly), and ultimately the case was settled by the George W. Bush administration. But it slowed Microsoft’s ability to dominate the internet. Zingales says today’s monopolies are yesterday’s startups, and a healthy system needs to make room for newcomers.

Market concentration has many parents. One of them is surely the so-called network effect, a key antitrust argument in the Microsoft case. That doctrine says the more people use a platform—say, the iPhone or Facebook—the more useful and dominant it becomes. The iPhone, for example, is popular in large part because of the voluminous offerings in Apple Inc.’s App Store, and the app store is popular because developers want to write programs for popular smartphones. Network effects can create what Warren Buffett calls “competitive moats.”

Problem is, the Chicago School’s focus on the impact on consumers—at least as it’s applied in the U.S.—won’t help antitrust enforcers to drain those moats. For example, because what Facebook offers is free, regulators weren’t concerned that its $22 billion acquisition of WhatsApp in 2014 might result in higher consumer prices. In fact, because WhatsApp is in a different industry, it didn’t even increase Facebook’s market share in social media.

The tech superstars insist they compete fiercely with each other and have lowered prices in many cases. They argue that their dominance is transitory because barriers to entry for would-be rivals are low. Google often says competition is “one click away.” And since consumers prefer their platforms over others’, why punish success? But when a cool innovation pops up, the superstars either acquire it or clone it. According to data compiled by Bloomberg, Alphabet, Amazon, Apple, Facebook, and Microsoft made 436 acquisitions worth $131 billion over the last decade. Antitrust cops made nary a peep.

Snap Inc.’s experience with Facebook is instructive. Since Snap rebuffed Facebook’s $3 billion offer in 2013, Facebook has knocked off one Snapchat innovation after another. That includes Snapchat Stories, which lets users upload images and video for viewing by friends for 24 hours before self-destructing. Facebook added the feature—even calling it Stories—to its Instagram, WhatsApp, and Messenger services, and most recently to the regular Facebook product. Snap’s shares now trade at around $15, below the $17 initial offering price in March. By offering advertisers the same features but with 100 times the audience, “Facebook basically killed Snapchat,” Taplin says.

Antitrust regulators have taken notice of all this, though much more so in Europe and Asia than in the U.S. The European Union’s $2.7 billion fine in late June against Google for favoring its shopping-comparison service over rivals’ is cheering Taplin and others who monitor the superstars. They ruefully note that the U.S. chose not to bring charges against Google in 2013 for the same conduct punished by the EU.

Instead of applying conventional antitrust theory, such as the effect of a merger on consumer prices, enforcers may need to consider alternative tools. One is to equate antitrust with privacy, not a traditional concern of the competition police. Germany’s Federal Cartel Office, for example, is examining charges that Facebook bullies users into agreeing to terms and conditions that allow the company to gather data on their web-surfing activities in ways they might not understand. Users who don’t agree are locked out of Facebook’s 2 billion-strong social media network.

Another avenue is to examine control over big data. Google collects web-surfing and online-purchasing data from more than a billion people. It uses that to send personalized ads, video recommendations, and search results. The monopoly control of consumer data by Facebook and Google on such a scale has raised antitrust questions in South Korea and Japan.

Taplin suggests that authorities look to 1956, when the U.S. forced Bell Labs to license its patents to all comers. The result was a deluge of innovation (semiconductors, solar cells, lasers, cell phones, computer languages, and satellites) commercialized by new companies (Fairchild Semiconductor International, Motorola, Intel, and Texas Instruments) and the formation of Silicon Valley. Why not require the tech superstars to do the same? Who knows what forces that might unleash.


Facebook exposed identities of moderators to suspected terrorists.

Giuseppe Sandro Mela.

2017-06-17.


«A security lapse that affected more than 1,000 workers forced one moderator into hiding – and he still lives in constant fear for his safety»

*

«Facebook put the safety of its content moderators at risk after inadvertently exposing their personal details to suspected terrorist users of the social network»

*

«The security lapse affected more than 1,000 workers across 22 departments at Facebook who used the company’s moderation software to review and remove inappropriate content from the platform, including sexual material, hate speech and terrorist propaganda»

*

«A bug in the software, discovered late last year, resulted in the personal profiles of content moderators automatically appearing as notifications in the activity log of the Facebook groups, whose administrators were removed from the platform for breaching the terms of service»

*

«The personal details of Facebook moderators were then viewable to the remaining admins of the group»

*

«Of the 1,000 affected workers, around 40 worked in a counter-terrorism unit based at Facebook’s European headquarters in Dublin, Ireland»

*

Terrorists have memories of iron, and they are vindictive too. Even years later.

One hopes that the people concerned have been given adequate protection.

The Guardian. 2017-06-17. Revealed: Facebook exposed identities of moderators to suspected terrorists

A security lapse that affected more than 1,000 workers forced one moderator into hiding – and he still lives in constant fear for his safety.

*

Facebook put the safety of its content moderators at risk after inadvertently exposing their personal details to suspected terrorist users of the social network, the Guardian has learned.

The security lapse affected more than 1,000 workers across 22 departments at Facebook who used the company’s moderation software to review and remove inappropriate content from the platform, including sexual material, hate speech and terrorist propaganda.

A bug in the software, discovered late last year, resulted in the personal profiles of content moderators automatically appearing as notifications in the activity log of the Facebook groups, whose administrators were removed from the platform for breaching the terms of service. The personal details of Facebook moderators were then viewable to the remaining admins of the group.

Of the 1,000 affected workers, around 40 worked in a counter-terrorism unit based at Facebook’s European headquarters in Dublin, Ireland. Six of those were assessed to be “high priority” victims of the mistake after Facebook concluded their personal profiles were likely viewed by potential terrorists.

The Guardian spoke to one of the six, who did not wish to be named out of concern for his and his family’s safety. The Iraqi-born Irish citizen, who is in his early twenties, fled Ireland and went into hiding after discovering that seven individuals associated with a suspected terrorist group he banned from Facebook – an Egypt-based group that backed Hamas and, he said, had members who were Islamic State sympathizers – had viewed his personal profile.

Facebook confirmed the security breach in a statement and said it had made technical changes to “better detect and prevent these types of issues from occurring”.

“We care deeply about keeping everyone who works for Facebook safe,” a spokesman said. “As soon as we learned about the issue, we fixed it and began a thorough investigation to learn as much as possible about what happened.”

The moderator who went into hiding was among hundreds of “community operations analysts” contracted by global outsourcing company Cpl Recruitment. Community operations analysts are typically low-paid contractors tasked with policing Facebook for content that breaches its community standards.

Overwhelmed with fear that he could face retaliation, the moderator, who first came to Ireland as an asylum seeker when he was a child, quit his job and moved to eastern Europe for five months.

“It was getting too dangerous to stay in Dublin,” he said, explaining that his family had already experienced the horrifying impact of terrorism: his father had been kidnapped and beaten and his uncle executed in Iraq.

“The only reason we’re in Ireland was to escape terrorism and threats,” he said.

The moderator said that others within the high-risk six had their personal profiles viewed by accounts with ties to Isis, Hezbollah and the Kurdistan Workers Party. Facebook complies with the US state department’s designation of terrorist groups.

“When you come from a war zone and you have people like that knowing your family name you know that people get butchered for that,” he said. “The punishment from Isis for working in counter-terrorism is beheading. All they’d need to do is tell someone who is radical here.”

Facebook moderators like him first suspected there was a problem when they started receiving friend requests from people affiliated with the terrorist organizations they were scrutinizing.

An urgent investigation by Facebook’s security team established that personal profiles belonging to content moderators had been exposed. As soon as the leak was identified in November 2016, Facebook convened a “task force of data scientists, community operations and security investigators”, according to internal emails seen by the Guardian, and warned all the employees and contracted staff it believed were affected. The company also set up an email address, nameleak@fb.com, to field queries from those affected.

Facebook then discovered that the personal Facebook profiles of its moderators had been automatically appearing in the activity logs of the groups they were shutting down.

Craig D’Souza, Facebook’s head of global investigations, liaised directly with some of the affected contractors, talking to the six individuals considered to be at the highest risk over video conference, email and Facebook Messenger.

In one exchange, before the Facebook investigation was complete, D’Souza sought to reassure the moderators that there was “a good chance” any suspected terrorists notified about their identity would fail to connect the dots.

“Keep in mind that when the person sees your name on the list, it was in their activity log, which contains a lot of information,” D’Souza wrote, “there is a good chance that they associate you with another admin of the group or a hacker …”

“I understand Craig,” replied the moderator who ended up fleeing Ireland, “but this is taking chances. I’m not waiting for a pipe bomb to be mailed to my address until Facebook does something about it.”

The bug in the software was not fixed for another two weeks, on 16 November 2016. By that point the glitch had been active for a month. However, the bug was also retroactively exposing the personal profiles of moderators who had censored accounts as far back as August 2016.

Facebook offered to install a home alarm monitoring system and provide transport to and from work to those in the high risk group. The company also offered counseling through Facebook’s employee assistance program, over and above counseling offered by the contractor, Cpl.

The moderator who fled Ireland was unsatisfied with the security assurances received from Facebook. In an email to D’Souza, he wrote that the high-risk six had spent weeks “in a state of panic and emergency” and that Facebook needed to do more to “address our pressing concerns for our safety and our families”.

He told the Guardian that the five months he spent in eastern Europe felt like “exile”. He kept a low profile, relying on savings to support himself. He spent his time keeping fit and liaising with his lawyer and the Dublin police, who checked up on his family while he was away. He returned to Ireland last month after running out of money, although he still lives in fear.

“I don’t have a job, I have anxiety and I’m on antidepressants,” he said. “I can’t walk anywhere without looking back.”

This month he filed a legal claim against Facebook and Cpl with the Injuries Board in Dublin. He is seeking compensation for the psychological damage caused by the leak.

Cpl did not respond to a request to comment. The statement provided by Facebook said its investigation sought to determine “exactly which names were possibly viewed and by whom, as well as an assessment of the risk to the affected person”.

The social media giant played down the threat posed to the affected moderators, but said that it contacted each of them individually “to offer support, answer their questions, and take meaningful steps to ensure their safety”.

“Our investigation found that only a small fraction of the names were likely viewed, and we never had evidence of any threat to the people impacted or their families as a result of this matter,” the spokesman said.

Details of Facebook’s security blunder will once again put a spotlight on the grueling and controversial work carried out by an army of thousands of low-paid staff, including in countries like the Philippines and India.

The Guardian recently revealed the secret rules and guidelines Facebook uses to train moderators to police its vast network of almost two billion users, including 100 internal training manuals, spreadsheets and flowcharts.

The moderator who fled Ireland worked for a 40-strong specialist team tasked with investigating reports of terrorist activity on Facebook. He was hired because he spoke Arabic, he said.

He felt that contracted staff were not treated as equals to Facebook employees but “second-class citizens”. He was paid just €13 ($15) per hour for a role that required him to develop specialist knowledge of global terror networks and scour through often highly-disturbing material.

“You come in every morning and just look at beheadings, people getting butchered, stoned, executed,” he said.

Facebook’s policies allow users to post extremely violent images provided they don’t promote or celebrate terrorism. This means moderators may be repeatedly exposed to the same haunting pictures to determine whether the people sharing them were condemning or celebrating the depicted acts.

The moderator said that when he started, he was given just two weeks training and was required to use his personal Facebook account to log into the social media giant’s moderation system.

“They should have let us use fake profiles,” he said, adding: “They never warned us that something like this could happen.”

Facebook told the Guardian that as a result of the leak it is testing the use of administrative accounts that are not linked to personal profiles.

Moderation teams were continually scored on the accuracy and speed of their decisions, he said, as well as on other factors such as their ability to stay up to date with training materials. If a moderator’s score dropped below 90% they would receive a formal warning.

In an attempt to boost morale among agency staff, Facebook launched a monthly award ceremony to celebrate the top quality performers. The prize was a Facebook-branded mug. “The mug that all Facebook employees get,” he noted.


Facebook. A monster with a thousand tentacles, fed by the bleating flock.

Giuseppe Sandro Mela.

2017-05-27.


When Lenin expressed his firm intention to hang all the bourgeois, his entourage wondered where he could ever procure such immense quantities of rope.

«The bourgeois will sell it to us.»

*

One can say anything about Lenin except that he was a fool. He knew the human mind and human psychology in depth, and he knew well how pointless it was to waste energy fighting people who would ruin themselves on their own, with their own hands and their own words. The NKVD rarely conducted active investigations: it was enough to maintain a suitable network of informers. It did not even need many false witnesses.

Apart from the rare exceptions of quiet, reserved people, those very few who know how to listen, the rest of humanity feels a compulsive urge to talk, in perpetual soliloquies recounting even the most intimate and private things in their lives.

Facebook fills that gap as one immense confessional.

Alone at home, and not infrequently in the office, people post everything there. They strike up exchanges that are as a rule dialogues of the deaf: the point is not so much to understand what the other person means as to vent what one thinks, or believes one thinks.

Facebook has quickly become the free annex of the psychiatrist's couch or, better, of what people imagine the psychiatrist's couch to be.

There people often shout out their impotent rage, their burning disappointments, their regrets, their terror of a future unknown in its course but sinister in aspect.

*

Thus people give themselves away on their own, voluntarily, like Christ to the Pharisees.

They delude themselves that anonymity protects them. It has even come to the point of robbers posting selfies taken during the robbery, as if the police were fools.

Far be it from us to name commercial products, but excellent facial-recognition packages abound, as do tools for the lexical and syntactic analysis of written text. And if these are commercially available for ordinary personal computers, one can easily imagine what must be at work on the mainframes of the espionage centers. From the analysis of photographs, a person's circle of friends and acquaintances is easily reconstructed.

*

Thus Facebook has become a great listening ear on the world.

«All of us, when we are uploading something, when we are tagging people, when we are commenting, we are basically working for Facebook»

*

«We tried to map all the inputs, the fields in which we interact with Facebook, and the outcome»

*

«We mapped likes, shares, search, update status, adding photos, friends, names, everything our devices are saying about us, all the permissions we are giving to Facebook via apps, such as phone status, wifi connection and the ability to record audio»

* * * * * * * *

Note.

In our office it is forbidden to bring one's own cell phone or similar devices; there is only a landline, plus a pay phone for guests. The computers are not networked. And the work gets done very well.


BBC. 2017-05-26. How Facebook’s tentacles reach further than you think.

Facebook’s collection of data makes it one of the most influential organisations in the world. Share Lab wanted to look “under the bonnet” at the tech giant’s algorithms and connections to better understand the social structure and power relations within the company.

*

A couple of years ago, Vladan Joler and his brainy friends in Belgrade began investigating the inner workings of one of the world’s most powerful corporations.

The team, which includes experts in cyber-forensic analysis and data visualisation, had already looked into what he calls “different forms of invisible infrastructures” behind Serbia’s internet service providers.

But Mr Joler and his friends, now working under a project called Share Lab, had their sights set on a bigger target.

“If Facebook were a country, it would be bigger than China,” says Mr Joler, whose day job is as a professor at Serbia’s Novi Sad University.

He reels off the familiar, but still staggering, numbers: the barely teenage Silicon Valley firm stores some 300 petabytes of data, boasts almost two billion users, and raked in almost $28bn (£22bn) in revenues in 2016 alone.

And yet, Mr Joler argues, we know next to nothing about what goes on under the bonnet – despite the fact that we, as users, are providing most of the fuel – for free.

“All of us, when we are uploading something, when we are tagging people, when we are commenting, we are basically working for Facebook,” he says.


The data our interactions provide feeds the complex algorithms that power the social media site, where, as Mr Joler puts it, our behaviour is transformed into a product.

Trying to untangle that largely hidden process proved to be a mammoth task.

“We tried to map all the inputs, the fields in which we interact with Facebook, and the outcome,” he says.

“We mapped likes, shares, search, update status, adding photos, friends, names, everything our devices are saying about us, all the permissions we are giving to Facebook via apps, such as phone status, wifi connection and the ability to record audio.”

All of this research provided only a fraction of the full picture. So the team looked into Facebook’s acquisitions, and scoured its myriad patent filings.

The results were astonishing.

Visually arresting flow charts that take hours to absorb fully, but which show how the data we give Facebook is used to calculate our ethnic affinity (Facebook’s term), sexual orientation, political affiliation, social class, travel schedule and much more.

One map shows how everything – from the links we post on Facebook, to the pages we like, to our online behaviour in many other corners of cyber-space that are owned or interact with the company (Instagram, WhatsApp or sites that merely use your Facebook log-in) – could all be entering a giant algorithmic process.

And that process allows Facebook to target users with terrifying accuracy, with the ability to determine whether they like Korean food, the length of their commute to work, or their baby’s age.


French elections. Facebook has blocked 30,000 'fake accounts'.

Giuseppe Sandro Mela.

2017-04-15.


Let us recap.

What Facebook says is true is true,

and what Facebook says is false is false.

All this one week before the first round of voting.

They must be scared stiff

that the French will elect Mrs Le Pen.

Facebook, after all, warmly loves its users:

«We need to give voters all the cards in their hand so they don’t vote on the basis of false rumours».

*

«It’s important for democracy. We don’t want people to vote on the basis of something they happened to read on Facebook»

*

Let us recap.

Mr Macron is a holy ascetic who renounced his life of fasting and penance at the Rothschild Bank to run for the presidency solely for the good of all the French.

Mrs Marine Le Pen is the devil incarnate: just imagine, she plans to leave the European Union and the euro.

What more satanic thing could ever be conceived?

Facebook?

Simple. 'Fake' is anything that opposes the current meritorious European managerial establishment and that venerable gentleman Mr Hollande, who has done so much for France.

Facebook cannot allow ordinary people, those who pay taxes and slave away from dawn to dusk, to be misled by false propaganda, clearly issued from the malevolent pen of Mr Putin.

No!!

It is not censorship, mind you,

like that of the OVRA.


The Local. 2017-04-16. Facebook tackles 30,000 fake accounts in France as election looms

The US social media giant took to Facebook on Thursday to announce a range of new security improvements, created to target “deceptive material, such as false news, hoaxes, and misinformation”.

Facebook singled out France as a country where it had been implementing the new measures. 

“In France, for example, these improvements have enabled us to take action against over 30,000 fake accounts,” Shabnam Shaik, a Facebook security team manager, wrote in an official blog post.

“While these most recent improvements will not result in the removal of every fake account, we are dedicated to continually improving our effectiveness.”

It explained that this had been made possible by automatically detecting pages that repeatedly posted the same content or sent continuous spam messages.

“We’ve found that a lot of false news is financially motivated, and as part of our work to promote an informed society, we have focused on making it very difficult for dishonest people to exploit our platform or profit financially from false news sites using Facebook.”

With the presidential election taking place in just nine days, many have seen it as a priority to prevent the very modern plague of fake news stories on Facebook from influencing the election. 

In late February, a group of 37 French and international media outlets, supported by Google, launched “CrossCheck”, a joint fact-checking platform aimed at detecting fake information which could affect the French presidential election.

“We need to give voters all the cards in their hand so they don’t vote on the basis of false rumours,” Clémence Lemaistre from French newspaper Les Echos, one of the partners in CrossCheck, told The Local at the time of the launch.

“It’s important for democracy. We don’t want people to vote on the basis of something they happened to read on Facebook,” she added.

While Lemaistre and those behind CrossCheck, including Agence France-Presse and French dailies Le Monde and Liberation, doubt that fake news could decide the outcome of the presidential election in which some 37 million French voters are expected to cast a ballot, they believe it could have an influence.


Facebook's filter bubble. A new instrument of power and conditioning.

Giuseppe Sandro Mela.

2017-02-08.


If one recalls the astonishment of reading the correspondence between Cardinal Cusanus and Pope Paul II, what follows will likely be better understood. Cusanus had brought from Germany Conrad Schweynheym and Arnold Pannartz, two collaborators of Gutenberg, who printed the Donatus minor, the De Oratione, and the De Civitate Dei in runs of 275 copies: he was delighted and reported on it at length to the Holy Father. The Pope, a holy but very practical man, saw instead in printing all its potential dangers, and he saw far: Luther's Reformation would have been very difficult without the aid of that new medium of mass communication.

*

The Lutheran experience rooted in people's minds, for centuries, the idea that control of the printing press was equivalent to control of the peoples. Hence control of printing houses, of journalists, and of whatever got printed.

The first great shock to that belief came under the Soviet regime. Even though all means of communication were subservient to the hegemonic power, ordinary people gave them no credence at all.

In short, relentless propaganda, substantially false and contradictory, had produced the opposite of the intended effect. We owe to Suslov the first systematic treatments of disinformacija; fortunately for humanity, politicians did not consider it time well spent to study his treatises on the subject.

*

In recent years we have witnessed a repeat, mutatis mutandis, of what happened in the Soviet Union.

The elections in Mecklenburg-Vorpommern, Sachsen-Anhalt, Rheinland-Pfalz, Baden-Württemberg, and Berlin had foreshadowed it. The British referendum, the Austrian presidential election, and the outcome of the Italian referendum had confirmed it. But the presidential and congressional elections in the United States made it evident even to the blind.

Even with total liberal control of the media, television and newspapers alike, these had failed to sway voters' intentions.

Certainly the tedious repetitiveness of the one permitted viewpoint, expressed moreover in 'politically correct' terminology, had saturated ordinary people, and the heap of implausible lies had overflowed; but a new factor had also intervened, one entirely beyond control: the internet and social media.

* * * * * * * *

«a 41.6% decline in print circulation in the United States from 2005 to today.»

«Since 2006, advertising revenues have more than halved, to $22.3 billion.»

Running a newspaper is expensive, and most major titles close their books at a loss. Journalists are a heavy cost. Copies are not priced to allow wide circulation. Digital editions are almost invariably behind a paywall.

That is not all. The United States has about 324 million people. According to Poynter, The Wall Street Journal prints about 2.1 million copies a day, USA Today 1.8 million, The New York Times 1.2 million, the New York Daily News 600,000, and the Los Angeles Times 601,000. The Washington Post prints no more than 500,000 copies.

Roughly speaking, one American in a hundred reads a daily newspaper, and usually reads only part of it.

Nor is that all. Newspaper reading is almost entirely the preserve of those with a university degree or higher.

In very few words: print is self-referential, and the average American cannot be influenced by it for the simple reason that he does not read it.

*

The data on television are even more discouraging. While soap operas are widely watched, even the news broadcasts draw a small audience. Televised debates on social or political topics are the most frequent trigger for channel switching: over 90% of their viewers are people actively involved in politics. Ordinary people snub them.

Here too, the television medium is self-referential, reaching the proverbial Joe the Plumber only in minimal measure.

*

Mr Donald Trump bet little or nothing on newspapers and television; Mrs Hillary Clinton, by contrast, spent a fortune on those media, with results there for all to see. Her messages were not reaching the final target: the average voter.

* * * * * * * *

Internet and social media.

Mr Trump bet instead on social media, and he won the election. He spoke directly to the great majority of voters.

First, social media are cheap. They are free to use, whether one is leaving comments or setting up pages and/or groups.

Second, social media posts as a rule run to a few lines, written almost invariably in everyday language, without circumlocution: their content is accessible even to those with an elementary education. One concept at a time.

Third, reach.

«Facebook has over 1.5 billion active users worldwide».

In the United States, three people in four follow social media.

President Trump's tweets average 40 million readers, with peaks of 60 million. An enormity compared with newspapers.

During the election campaign, readership had passed 120 million.

«Other opinions and related information get filtered out – a consequence of Facebook’s increasing function as the primary source of information on current events for many of its users. So they have little chance of forming well-rounded opinions»

* * * * * * * *

The problem of the liberal elites.

«Fake news, propaganda and “disinformatzya” are changing the media landscape – in the US, Russia and Turkey and across the world. The question is how to combat them»

Let us immediately clarify a lexical point.

For American Democratic liberals and for ideological European socialists, anything whatsoever that contradicts their Weltanschauung is ascribed to "fake news," "propaganda," or "disinformatzya."

They have therefore elaborated the concept of the "filter bubble."

Comme d'habitude, they present it all as an enormous favor to social media users and, since they are so good and generous, to Google users as well.

Dedicated software analyzes and records every choice made, that is, every site visited. In other words, a complete and perpetual trace is kept of everything a person has read or watched.

The software then offers the user a set of choices calibrated on that history.
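
To make the mechanism concrete, here is a minimal sketch of such history-based filtering. It is an illustration under stated assumptions, not any platform's actual algorithm: the topic tags, the scoring rule, and every name in it (user_history, score, and so on) are invented for the example.

```python
from collections import Counter

# Hypothetical interest profile: topic counts built from everything the
# user has previously clicked on (the "complete and perpetual trace").
user_history = ["politics", "politics", "economy", "politics", "sport"]
profile = Counter(user_history)

# Candidate items the platform could show next, each tagged with a topic.
candidates = [
    {"title": "Election polls tighten", "topic": "politics"},
    {"title": "New trade figures", "topic": "economy"},
    {"title": "Opera festival opens", "topic": "culture"},
]

def score(item):
    # Naive relevance rule: how often has the user consumed this topic before?
    return profile[item["topic"]]

# Rank candidates by past behavior. Topics the user has never visited score
# zero and sink to the bottom; repeated over time, that feedback loop is
# what narrows the bubble.
for item in sorted(candidates, key=score, reverse=True):
    print(item["title"], "->", score(item))
```

Note what never happens in such a loop: an item from a topic the user has never visited is never promoted, which is precisely the entrapment described below.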

It might seem something worthy of Saint Teresa of Calcutta, but it is in fact a doubly poisoned apple.

– Everyone is tracked. This would hardly seem to match the common notion of privacy.

– The reader is gradually trapped within the narrow circuit of sites and pages he visits, making it difficult for him to weigh differing opinions. In the end, his mind is conditioned.

* * * * * * * *

Frankly, it seems legitimate to ask whether technological progress really corresponds to human progress as well.


Deutsche Welle. 2017-02-07. Facebook’s filter bubble. [Video]

Facebook has over 1.5 billion active users worldwide. For many of them, the social network is their primary source of information – which can result in a limited selection that only reinforces pre-existing views.

*

Most Facebook users tend to network with like-minded people. Now experts are warning this could result in what they call a filter bubble – a limiting of content to only what reinforces the user’s own pre-existing views. Other opinions and related information get filtered out – a consequence of Facebook’s increasing function as the primary source of information on current events for many of its users. So they have little chance of forming well-rounded opinions.


Deutsche Welle. 2017-02-07. Fake news is a red herring.

Fake news, propaganda and “disinformatzya” are changing the media landscape – in the US, Russia and Turkey and across the world. The question is how to combat them.

*

Watching the 2016 US presidential election was already a surreal experience, as dozens of qualified candidates lost out to a failed businessman and reality television star. But the strangeness of the election was complicated by news stories that seemed just plausible enough to be true: a papal endorsement of Donald Trump, the fiery suicide of an FBI agent investigating Hillary Clinton’s emails, Black Lives Matter as an attempt to create a race war in the US.

As you likely know, these stories aren’t true, though they did circulate widely on Facebook and other social media sites. “Fake news” and its detrimental effects on democracy has become a major theme in contemporary politics. Faced with questioning from CNN reporter Jim Acosta during his first press conference in six months, President-elect Donald Trump refused to take Acosta’s question, declaring, “You are fake news.”

Trump’s evasion referenced his anger at CNN for reporting on an intelligence dossier that suggests Russian authorities have been compiling compromising information on Trump in the hope of blackmailing him. CNN did not reproduce the dossier (online news outlet Buzzfeed did), but the president-elect was incensed that CNN would call attention to the story based on unverified documents. 

Different types of fake news

It’s tempting to say that Trump is using “fake news” to mean “news I don’t like”, but the reality is more complicated. “Fake news,” in this usage, means “real issues that don’t deserve as much attention as they’re receiving.” This form of fake news was likely an important factor in the 2016 campaign. There’s a compelling argument that the release of Clinton and Podesta’s emails by Russian hackers – and the media firestorm that ensued – were key to the outcome of the US election. While media outlets overfocused on the non-scandal of the emails, this wasn’t “fake news” so much as it was “false balance,” with newspapers playing up a Clinton “scandal” to counterbalance an endless sequence of Trump scandals.

There’s another type of “fake news” that surfaces during virtually every political campaign: propaganda. Propaganda is weaponized speech that mixes truthful, deceptive and false speech, and is designed explicitly to strengthen one side and weaken the other. Propaganda has been around for a long time, preceding the era of mass media. (Some scholars argue that the inscriptions on ancient Roman coinage should be understood as propaganda, designed to strengthen an emperor’s rule over a massive territory.) Propaganda may be an inevitable feature of electoral contests, and vicious propaganda campaigns, such as the “swiftboating” of Senator John Kerry, proved effective even before the age of social media. But tools such as Twitter and Facebook may make propaganda harder to detect and debunk. Many citizens are skeptical of claims made by politicians and parties, but are less apt to question news shared by their friends. On a medium like Facebook which gives primacy to information shared by friends, political propaganda spreads rapidly, reaching a reader from all sides, and can be difficult to distinguish from fact-based news.

A third category of “fake news,” relatively new to the scene in most countries, is disinformatzya. This is news that’s not trying to persuade you that Trump is good and Hillary bad (or vice versa). Instead, it’s trying to pollute the news ecosystem, to make it difficult or impossible to trust anything. This is a fairly common tactic in Russian politics and it’s been raised to an art form in Turkey by President Tayyip Erdogan, who uses it to discredit the internet, and Twitter in particular. Disinformatzya helps reduce trust in institutions of all sorts, leading people either to disengage with politics as a whole or to put their trust in strong leaders who promise to rise above the sound and fury. The embrace of “fake news” by the right wing in America as a way of discrediting the “mainstream media” can be understood as disinformatzya designed to reduce credibility of these institutions – with all the errors news organizations have made, why believe anything they say?

One of the best known forms of disinformatzya is “shitposting,” the technique of flooding online fora with abusive content, not to persuade readers, but to frustrate anyone trying to have a reasonable discussion of politics on the internet. Disinformatzya may also explain some of the strangest phenomena of the election season, including Pizzagate, the bizarre conspiracy that led a man to “investigate” a pizza parlor with an assault rifle out of the belief – expounded and developed in thousands of online posts – that John Podesta and Hillary Clinton were trafficking children out of the basement.

No simple answers

What can we do about news so toxic that it moves people to take up arms to investigate conspiracies? Unfortunately, the simple answers are inadequate, and some are downright counterproductive. Instead, any successful approach to fake news demands that we treat these three different diseases with different techniques.

Unbalanced news is a pre-digital problem that’s become worse in the digital age. News organizations would overfocus election coverage on the horse race and underfocus on policy issues well before the internet. Add in an explosion of ad-driven news sites and the ability to choose what we pay attention to and you’ve got a recipe for echo chambers. Mix in algorithmic filtering, where social media platforms try to deliver us the information we most want to see, and you’ve got filter bubbles. Scholarship on echo chambers and filter bubbles suggests that people who are informationally isolated become more partisan and less able to compromise, suggesting a rough road ahead for deliberative democracy.

Solving the problem of sensationalistic, click-driven journalism likely requires a new business model for news that focuses on its civic importance above profitability. In many European nations, public broadcasters provide at least a partial solution to this problem – in the US, a strong cultural suspicion of government involvement with news complicates this solution. A more promising path may be to address issues of filtering and curation. Getting Facebook to acknowledge that it’s a publisher, not a neutral platform for sharing content, and that its algorithmic decisions have an impact would be a first step towards letting users choose how ideologically isolated or exposed they want to be. Building public interest news aggregators that show us multiple points of view is a promising direction as well. Unbalanced news is a problem that’s always been with us, dealt with historically by shaping and adhering to journalistic standards – it’s now an open question whether social media platforms will take on that responsibility.

Fighting propaganda and disinformatzya

Fighting propaganda, particularly fact-free propaganda, is a tougher challenge. Many people find it infuriating to see Trump repeatedly claim that he won a landslide victory in the Electoral College when his win was one of the narrowest in history. Unfortunately, conventional fact checking does not counter propaganda very well – counter a claim and people remember the original claim, not the debunking of it. Even with debunking, the original claim remains on the internet, where motivated reasoning helps us select the claims that are consonant with our values, not with truth. 

There are two answers most often proposed for this problem and both are bad. While it seems logical to ask platforms such as Facebook to filter out fake news, it’s dangerous to give them the power to decide what speech is and is not acceptable. Furthermore, Facebook is already trying to solve the problem by asking users to flag fake news, a technique unlikely to work well, as researcher Robyn Caplan points out, because users are really bad at determining what news is fake. So perhaps the solution is to teach media literacy, so that readers become savvier about identifying and debunking propaganda. Unless of course, as social media scholar danah boyd suggests, media literacy is part of what’s gotten us into this mess. By teaching students to read news critically and search for stories from multiple sources, we may have turned them away from largely credible resources and towards whatever Google search results best fit their preconceptions of the world. 

Surprisingly, our best bets for fighting propaganda may come from a return to the past. Stanford historian Fred Turner wrote a brilliant book, “The Democratic Surround,” on how US intellectuals had tried to fight fascist propaganda in the 1940s through reinforcing democratic and pluralistic values. Rather than emphasizing critical reading or debate, the thinkers Turner documents designed massive museum installations intended to force Americans to wrestle with the plurality and diversity of their nation and the world. While exhibits such as “The Family of Man” might be an impossibly dated way to combat fake news, the idea of forcing people to confront a wider world than the one they’re used to wrestling with goes precisely to the root of the problems that enable fake news.

Even scarier than unbalanced news and propaganda is disinformatzya, for the simple reason that no one is really sure how it works. In an essay called “Hacking the Attention Economy,” Boyd suggests that the masters of disinformatzya are the denizens of online communities like 8chan and reddit, where manufacturing viral content is a form of play that’s been recently harnessed to larger political agendas. Understanding whether a phenomenon like Pizzagate is simply a strange moment in a strange election, or a masterful piece of disinformatzya designed to reduce confidence in media and other institutions, is a topic that demands both aggressive reporting and scholarly study. At this point, the task of understanding this breed of fake news has barely registered on the radar of journalists or scholars.

Fake news is a satisfying bogeyman

Harvard scholar Judith Donath suggests that combating any sort of fake news requires an understanding of why it spreads. She sees these stories as a marker of group identity: “When a story that a community believes is proved fake by outsiders, belief in it becomes an article of faith, a litmus test of one’s adherence to that community’s idiosyncratic worldview.” Once we understand these stories less as claims of truth and more as badges of affiliation, attacking them head on no longer seems as savvy. If these stories are meant less to persuade outsiders, and more to allow insiders to show their allegiance to a point of view, combating their spread as if they were infections no longer seems like a valid strategy.

I suspect that both the left and the right are overfocusing on fake news. Preliminary analysis conducted by the Media Cloud team at MIT and Harvard suggests that while fake news stories spread during the 2016 US election, they were hardly the most influential media in the dialog. In tracking 1.4 million news stories shared on Facebook from over 10,000 news sites, the most influential fake news site we found ranked 163rd in our list of most shared sources. Yes, fake news happens, but its impact and visibility comes mostly from mainstream news reporting about fake news.

Fake news is a satisfying bogeyman for people of all political persuasions, as it suggests that people disagree with us because they’ve been spoon-fed the wrong set of facts. If only we could get people to read the truth and see reality as we see it, we could have consensus and move forward! 

The truly disturbing truth is that fake news isn’t the cause of our contemporary political dysfunction. More troublingly, we live in a world where people disagree deeply and fundamentally about how to understand it, even when we share the same set of facts. Solving the problems of fake news makes that world slightly easier to navigate, but it doesn’t scratch the surface of the deeper problem of finding common ground with people with whom we disagree.


Deutsche Welle. 2017-02-07. What goes on in a far-right Facebook filter bubble?

People tend to surround themselves with like-minded people – filter bubbles have taken that to a new level. Two German reporters were shocked when they entered the world of the far-right on Facebook via a fake account.

*

What goes on in far-right filter bubbles on Facebook?

To find out first-hand, two TV reporters for Germany’s ZDF broadcaster created a fake account – 33-year-old “Hans Mayer,” a proud German patriot with a clear penchant for right-wing topics. They encountered a world of closed groups, hatred, lies and agitation.

“Mayer,” the reporters quickly learned, was surrounded by many like-minded men and women in a filter bubble that had little to do with reality and where objections never stood a chance. A filter bubble results from a personalized search and a website’s algorithm selecting information a user might want to see, withholding information that disagrees with his or her viewpoints.

Virtual expedition

These filter bubbles are a “great threat to democracy,” ZDF reporter Florian Neuhann says. He and his colleague David Gebhard had an idea of what went on in far-right filter bubbles, Neuhann told DW, but were “totally taken aback by the speed at which their fake account accumulated Facebook friends and the utter absurdity of the stories being spread.”

People in filter bubbles focus their hatred on the same person or phenomenon – like Chancellor Angela Merkel or refugees – and they whip each other into a frenzy to outdo one another with abuse, explains Wolfgang Schweiger, a communication scientist at Hohenheim University.

On day three of the experiment, “Hans Mayer’s” timeline brimmed with fake news and lurid headlines: stories about government plans to take away the children of right-wing critics, a report stating that the city of Cologne canceled its carnival celebrations for fear of refugees, fake Merkel quotes – all shared thousands of times. The reports often followed a pattern, with an actual source hidden somewhere in the story that had dealt with the issue on hand, however remotely.

Worldwide, populists benefit from such activities; their supporters rarely challenge the “facts” they are presented.

Alarming radicalization

Humans, Schweiger says, tend to believe information passed on by those who hold the same or similar views they do.

A week into the experiment, “Mayer” had many friends on Facebook and was invited into closed groups where users openly urged resisting the system. Forget inhibitions: Interspersed between cute cat photos and pictures of weddings, posts would read “Shoot the refugees, stand them in front of a wall and take aim,” while others denied the Holocaust. No one objected.

Blind to other people’s views

By day 12, “Mayer” had 250 Facebook friends – real people who never met him in person but felt he shared their beliefs. Neuhann and Gebhard wondered what would happen if “Mayer” were to pipe up and disagree.

So they posted official statistics showing that crime rates have not risen despite the influx of hundreds of thousands of refugees into Germany. To no avail, Neuhann says: “We were either ignored or insulted.”

It’s a parallel world, Neuhann says. Part of the bubble is so far gone there is no way reasonable arguments can reach them, he adds, arguing that some people are only partially involved. They still have a life and maybe a job, so they might be approachable, though “perhaps not as much online as offline.”

Asked whether the reporters are afraid now that their story is out in the open, Neuhann says no, since “Hans Mayer” wasn’t the fake account’s real name.

It hasn’t been deactivated, but the journalists broke off their experiment after three weeks. The right-wing filter bubble continues to exist.