Published in: China, Economics and Industrial Production, Russia, Science & Technology

CRAIC. The Sino-Russian competitor to Boeing and Airbus is born.

Giuseppe Sandro Mela.

2017-06-03.


The news was in the air, and in part it had already been announced.

China. The C919 and the An-225. The Chinese aircraft industry develops.

*

Russia and China both have consolidated experience in designing and building aircraft, above all for military purposes.

In recent decades they have met the needs of domestic and international air traffic by purchasing aircraft from both Boeing and Airbus, and one may estimate that the renewal and expansion of their passenger and freight fleets should begin in roughly seven years.

In recent days came the news of the founding of the

«China-Russia Commercial Aircraft International Corporation Limited (CRAIC), a joint venture between Commercial Aircraft Corporation of China (COMAC) and Russia’s United Aircraft Corp (UAC), in Shanghai, China May 22, 2017».

*

«COMAC and UAC first announced the program in 2014. In November, they said they had set up a joint venture in Shanghai and unveiled a mock-up of the basic version of the jet that would have a range of up to 12,000 kilometers (7,500 miles) and seat 280 passengers»

It would seem entirely reasonable that the aircraft CRAIC produces will replace the American and European ones currently in service.

And that is not all.

«UAC and COMAC hold equal shares in their venture, whose jet they said would be 10-15 percent cheaper to run than planes from Boeing and Airbus»

Low operating costs, then, combined with the anticipated and plausibly low, very low, production costs: two factors that could make CRAIC a formidable competitor for both Boeing and Airbus.

* * * * * * *

Let us recall a significant episode, one that both the Russians and the Chinese remember all too well.

«The Il-96 prototype first flew on 28 September 1988 with Soviet turbofan engines. Since series production began, 24 Ilyushin Il-96s have been built. Thirteen Ilyushin Il-96s are currently in service with Russian airlines (Aeroflot, Aerostars Airlines, Rossija Airlines), and three are in service abroad with Cubana de Aviación.

On 14 July 1993 Aeroflot's first Ilyushin Il-96 operated its first scheduled international flight on the Moscow-SVO – New York-JFK route.

The aircraft was redesigned, and in April 1993 the first and only Ilyushin Il-96 fitted with the more advanced American Pratt & Whitney PW2337 turbofans (the same type mounted on the McDonnell Douglas C-17 Globemaster III tactical transport) and new computerized avionics took off under the designation Il-96M; this prototype never entered series production, penalized by the decision of the United States Congress not to allow Pratt & Whitney to collaborate with the Russian Federation.

On 16 May 1997 the Ilyushin Il-96-400T, with more modern Russian Aviadvigatel PS-90 engines, made its first flight. From 2009 this version entered the cargo fleet of Moscow's Atlant-Soyuz Airlines and later that of Russia's Air Company Polet of Voronež.» [Source]

*

While it is entirely reasonable that the United States Congress did not allow Pratt & Whitney to collaborate with the Russian Federation, especially for the PW2337 engines, which also had military value, it is just as reasonable to understand the Sino-Russian determination to have their own independent production.

It would come as no great surprise, however, if in the future America came to regret not having opted for a collaborative strategy, once competition from CRAIC begins to erode sales of its own aircraft.

Note.

Some inveterate gossips have commented on the meaning of “craic” in Gaelic….


Reuters. 2017-05-23. China, Russia set up wide-body jet firm in new challenge to Boeing, Airbus

China and Russia on Monday completed the formal registration of a joint venture to build a wide-body jet, kick-starting full-scale development of a program aimed at competing with market leaders Boeing Co (BA.N) and Airbus SE (AIR.PA).

State plane makers Commercial Aircraft Corp of China Ltd (COMAC) [CMAFC.UL] and Russia’s United Aircraft Corp (UAC) said at a ceremony in Shanghai the venture would aim to build a “competitive long range wide-body commercial aircraft”.

The announcement comes just weeks after COMAC successfully completed the maiden flight of its C919, China’s first home-grown narrow-body passenger jet.

COMAC President Jin Zhuanglong said the two firms had decided to hold the establishment ceremony after the C919’s flight.

“This program is aimed at fulfilling future market demand,” he told reporters. “Our two countries, our two firms … have created this joint venture to undertake responsibilities such as organization, research, management and implementation.”

The program will have a research center in Moscow and assembly line in Shanghai, he said, adding division of labor was still being discussed.

Guo Bozhi, general manager of COMAC’s wide-body department, said the venture would ask suppliers to bid for the contract to build the engine by year-end.

MAIDEN FLIGHT

COMAC and UAC first announced the program in 2014. In November, they said they had set up a joint venture in Shanghai and unveiled a mock-up of the basic version of the jet that would have a range of up to 12,000 kilometers (7,500 miles) and seat 280 passengers.

UAC President Yuri Slyusar said the firms looked to complete the maiden flight and first delivery during 2025-2028, and aimed to take 10 percent of a market dominated by the Boeing 787 and Airbus A350.

Previously, they targeted a maiden flight in 2022 and delivery in or after 2025.

UAC is also developing a version of Russian wide-body jet Ilyushin IL-96. Slyusar said the two programmes had different requirements, and that UAC would use experience with the IL-96 to aid development of the Chinese-Russian jet.

UAC and COMAC hold equal shares in their venture, whose jet they said would be 10-15 percent cheaper to run than planes from Boeing and Airbus.

Last July, Boeing forecast airlines worldwide would need 9,100 wide-body planes over 20 years through 2035, with a wave of replacement demand around 2021-2028.

Over the past decade, China has plowed billions of dollars into domestic jet development to raise its profile in global aviation and boost high-tech manufacturing.

Published in: Science & Technology

Google. Cloud Tensor Processing Unit. A road to power.

Giuseppe Sandro Mela.

2017-05-24.


«The new chip and a cloud-based machine-learning supercomputer will help Google establish itself as an AI-focused hardware maker»

*

«If artificial intelligence is rapidly eating software, then Google may have the biggest appetite around»

*

«The announcement reflects how rapidly artificial intelligence is transforming Google itself, and it is the surest sign yet that the company plans to lead the development of every relevant aspect of software and hardware»

*

«Perhaps most importantly, for those working in machine learning at least, the new processor not only executes at blistering speed, it can also be trained incredibly efficiently»

*

«Called the Cloud Tensor Processing Unit, the chip is named after Google’s open-source TensorFlow machine-learning framework»

*

«To create an algorithm capable of recognizing hot dogs in images, for example, you would feed in thousands of examples of hot-dog images—along with not-hot-dog examples—until it learns to recognize the difference. …. But the calculations required to train a large model are so vastly complex that training might take days or weeks.»

*

«Pichai also announced the creation of machine-learning supercomputers, or Cloud TPU pods, based on clusters of Cloud TPUs wired together with high-speed data connections»

*

«These TPUs deliver a staggering 128 teraflops»

* * * * * * *

If its use in image analysis calls to mind biomedical applications, for example the study of radiological images, the field could well extend also to the recognition and classification of faces and persons, or to the study of changes that have occurred in satellite reconnaissance imagery.

*

Mrs Fei-Fei Li, chief scientist at Google Cloud and director of Stanford's AI Lab, was, as her Chinese name suggests, born in Beijing in 1976. She is married to an Italian, Mr Silvio Savarese. After graduating from Princeton University with a degree in physics, she earned a PhD in electrical engineering at the California Institute of Technology. Hers is a thoroughly respectable curriculum.

A single blemish, though one still to be weighed.

From 1999 to 2003 she held a Paul and Daisy Soros Fellowship for New Americans.


MIT Technology Review. 2017-05-17. Google Reveals a Powerful New AI Chip and Supercomputer

The new chip and a cloud-based machine-learning supercomputer will help Google establish itself as an AI-focused hardware maker.

*

If artificial intelligence is rapidly eating software, then Google may have the biggest appetite around.

The announcement reflects how rapidly artificial intelligence is transforming Google itself, and it is the surest sign yet that the company plans to lead the development of every relevant aspect of software and hardware.

Perhaps most importantly, for those working in machine learning at least, the new processor not only executes at blistering speed, it can also be trained incredibly efficiently. Called the Cloud Tensor Processing Unit, the chip is named after Google’s open-source TensorFlow machine-learning framework.

Training is a fundamental part of machine learning. To create an algorithm capable of recognizing hot dogs in images, for example, you would feed in thousands of examples of hot-dog images—along with not-hot-dog examples—until it learns to recognize the difference. But the calculations required to train a large model are so vastly complex that training might take days or weeks.
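A minimal sketch of the kind of supervised training described above, using the Keras API of the TensorFlow framework the article names; the folder layout, image size and tiny network are illustrative assumptions, not Google's actual setup:

```python
# Minimal binary image classifier ("hot dog" vs "not hot dog") in TensorFlow/Keras.
# Assumes an illustrative folder layout: data/hotdog/*.jpg and data/not_hotdog/*.jpg.
import tensorflow as tf

# Labelled examples are read from disk; labels are inferred from subfolder names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(128, 128), batch_size=32)

# A deliberately small convolutional network: feature extraction, then a single
# sigmoid output giving the probability of "hot dog".
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Training is the expensive step the article describes: repeated passes over the
# examples, adjusting the weights until the network separates the two classes.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```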

Pichai also announced the creation of machine-learning supercomputers, or Cloud TPU pods, based on clusters of Cloud TPUs wired together with high-speed data connections. And he said Google was creating the TensorFlow Research Cloud, consisting of thousands of TPUs accessible over the Internet.

 “We are building what we think of as AI-first data centers,” Pichai said during his presentation. “Cloud TPUs are optimized for both training and inference. This lays the foundation for significant progress [in AI].”

Google will make 1,000 Cloud TPU systems available to artificial intelligence researchers willing to openly share details of their work.

Pichai also announced a number of AI research initiatives during his speech. These include an effort to develop algorithms capable of learning how to do the time-consuming work involved with fine-tuning other machine-learning algorithms. And he said Google was developing AI tools for medical image analysis, genomic analysis, and molecule discovery.

Speaking ahead of the announcements, Jeff Dean, a senior fellow at Google, said this offering might help advance AI. “Many top researchers don’t have access to as much computer power as they would like,” he noted.

Google’s move into AI-focused hardware and cloud services is driven, in part, by efforts to speed up its own operations. Google itself now uses TensorFlow to power search, speech recognition, translation, and image processing. It was also used in the Go-playing program, AlphaGo, developed by another Alphabet subsidiary, DeepMind.

But strategically, Google could help prevent another hardware company from becoming too dominant in the machine-learning space. Nvidia, a company that makes the graphics processing chips that have traditionally been used for deep learning, is becoming particularly prominent with its various products (see “Nvidia CEO: Software Is Eating the World, but AI is Going to Eat Software”).

To provide some measure of the performance acceleration offered by its cloud TPUs, Google says its own translation algorithms could be trained far more quickly using the new hardware than existing hardware. What would require a full day of training on 32 of the best GPUs can be done in an afternoon using one-eighth of one of its TPU Pods.

“These TPUs deliver a staggering 128 teraflops, and are built for just the kind of number crunching that drives machine learning today,” Fei-Fei Li, chief scientist at Google Cloud and the director of Stanford’s AI Lab, said prior to Pichai’s announcement.

A teraflop refers to a trillion “floating point” operations per second, a measure of computer performance obtained by crunching through mathematical calculations. By contrast, the iPhone 6 is capable of about 100 gigaflops; a gigaflop is one billion floating point operations per second.
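Put in numbers, as a quick back-of-the-envelope check using the figures quoted above:

\[
\frac{128\ \text{teraflops}}{100\ \text{gigaflops}} \;=\; \frac{128 \times 10^{12}\ \text{FLOP/s}}{100 \times 10^{9}\ \text{FLOP/s}} \;=\; 1280,
\]

so a single Cloud TPU device delivers roughly 1,280 times the floating-point throughput of the iPhone 6 used as the article's yardstick.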

Google says it will still be possible for researchers to design algorithms using other hardware, before porting it over to the TensorFlow Research Cloud. “This is what democratizing machine learning is all about—empowering developers by protecting freedom of design,” Li added.

A growing number of researchers have adopted TensorFlow since Google released the software in 2015. Google now boasts that it is the most widely used deep-learning framework in the world.

Published in: Organized Crime, Science & Technology

WannaCry and its extortionist accomplice.

Giuseppe Sandro Mela.

2017-05-17.


«Should the Government Keep Stockpiling Software Bugs?»

*

«WannaCry, also called WanaCrypt0r 2.0, is a computer virus responsible for a large-scale epidemic in May 2017. The virus, of the ransomware type, encrypts the files on the computer and demands a ransom of a few hundred dollars to decrypt them.

On 12 May 2017 the malware infected the computer systems of numerous companies and organizations around the world, including Portugal Telecom, Deutsche Bahn, FedEx, Telefónica, Tuenti, Renault, the National Health Service, the Russian Interior Ministry and the University of Milano-Bicocca.

As of 16 May, more than two hundred thousand computers had been hit in at least 99 countries, making it one of the largest computer contagions ever.

WannaCry exploits an SMB vulnerability through an exploit called EternalBlue, developed by the United States National Security Agency to attack computer systems based on the Microsoft Windows operating system. EternalBlue had been stolen by a group of hackers calling themselves The Shadow Brokers and published online on 14 April 2017.

The malware spreads through fake emails and, once installed on a computer, begins to infect other systems on the same network as well as vulnerable systems exposed to the internet, which are infected without any user intervention. When it infects a computer, WannaCry encrypts files, blocking access to them, and appends the .WCRY extension; it also prevents the system from restarting.» [Source]
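Since the worm propagates over SMB, reachable on TCP port 445, a first defensive check is simply to find which hosts on a local network expose that port. A minimal sketch using only the Python standard library; the subnet is an illustrative assumption, and an open port indicates exposure, not infection:

```python
# Scan a subnet for hosts with TCP port 445 (SMB) reachable.
# An open 445 only flags a host to patch (MS17-010) or firewall; it proves nothing more.
import ipaddress
import socket

SUBNET = "192.168.1.0/24"  # illustrative assumption: replace with your own network
SMB_PORT = 445

for host in ipaddress.ip_network(SUBNET).hosts():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.3)  # short timeout keeps the sweep fast
        if sock.connect_ex((str(host), SMB_PORT)) == 0:
            print(f"{host}: port 445 open")
```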


MIT Technology Review. 2017-05-17. WannaCry Has a More Lucrative Cousin That Mines Cryptocurrency for Its Masters

The same exploits that enabled WannaCry to spread globally have been in use in another malware attack since April, making far more money in the process.

*

The same exploits that allowed the WannaCry ransomware attack to spread so quickly have been used to set up an illicit cryptocurrency mining scheme. And it sure was worth it to the hackers.

Late last week, the world was hit by ransomware that locked up computers in hospitals, universities, and private firms, demanding Bitcoin in exchange for files being decrypted. It was able to spread so fast thanks to a Windows flaw weaponized by the U.S. National Security Agency known as EternalBlue, and a back door called DoublePulsar. Sadly, the tools were inadvertently lost and leaked because the NSA considered it wise to stockpile them for future use.

WannaCry was halted by swift work on behalf of dedicated security researchers. But during investigations into the attack, security firm Proofpoint has found that another piece of malware, called Adylkuzz, makes use of the same exploits to spread itself around the world’s insecure Windows devices.

This particular hack has gone unnoticed since April. That’s because unlike WannaCry, which demands attention to get money directly from a user, Adylkuzz simply installs a piece of software and then borrows a PC’s resources. It then sets about mining the little-known cryptocurrency called Monero using your computer. It does so in the background, with users potentially unaware of its presence—though perhaps a little frustrated because their computers are slower than usual.

It makes sense that EternalBlue and DoublePulsar are being used in this way, said Nolen Scaife, a security researcher at the University of Florida. The combination of exploits allows attackers to load just about any type of malware they want onto compromised machines. “It’s important to stress that it could be anything—it could be keyloggers, for example,” he told MIT Technology Review. “But what we’re seeing is that attackers are using this wherever this makes the most money.”

Interestingly, though, it’s the attack that has until now gone unnoticed that has secured the most loot. WannaCry’s attempt to extort cash in return for unlocking encrypted files has only drummed up $80,000 at the time of writing—probably because Bitcoin, the currency WannaCry’s perpetrators are demanding, is hard to use. Meanwhile one estimate suggests that the Adylkuzz attack could have already raised as much as $1 million.

In some sense, Adylkuzz is less problematic than WannaCry. It’s certainly less overtly destructive. But it does raise a more pressing cause for concern: if it’s been running since April, how many other leaked NSA tools have been used to carry out attacks that have so far gone unnoticed? Stay tuned—there may be more to come.

(Read more: Proofpoint, Reuters, “The WannaCry Ransomware Attack Could’ve Been a Lot Worse,” “Security Experts Agree: The NSA Was Hacked,” “Should the Government Keep Stockpiling Software Bugs?”)

Published in: Science & Technology

Helical Skyscrapers. The frontier of new structures.

Giuseppe Sandro Mela.

2017-03-11.

[Image: Oko Tower, Moscow.]


The skylines of the world's great, teeming cities are beginning to fill with skyscrapers of a very strange shape, almost like twisted barley-sugar candles.

At times they take on altogether bizarre, apparently inexplicable forms.

These are the so-called helical skyscrapers.

[Image: Central Bank of Russia, Moscow.]


They are certainly the fruit of the architect's aesthetic taste, but in reality they answer a very precise need: to significantly reduce the pressure the wind exerts on the structure, and above all the alternating forces generated by vortex shedding, the so-called von Kármán effect.


The result is remarkable: the skyscraper can be built taller using lighter structures; it is decidedly more stable on its anti-seismic foundations and also withstands large temperature swings better.


Technical note.

«A von Kármán vortex street is a wake pattern characterized by the alternating shedding of vortices, which occurs behind certain bluff bodies (bodies exhibiting marked separation of the boundary layer).

Vortex streets can be observed only within a given range of Reynolds numbers (Re). The range of Re depends on the shape and size of the body causing the phenomenon, as well as on the kinematic viscosity of the fluid. The phenomenon is named after the fluid dynamicist and engineer Theodore von Kármán.

Under suitable Reynolds-number conditions, two rows of vortices form, one opposite the other, such that the center of each vortex in one row faces the midpoint between two vortices of the opposite row.

The formation of a vortex modifies the pressure distribution around the body. Consequently, the alternating formation of vortices generates periodically varying forces and hence a vibration of the body. If the vortex-shedding frequency approaches the body's natural frequency of vibration, the body resonates. Examples of this kind of phenomenon: the vibration of telephone wires; the stronger vibration of a car radio antenna at certain speeds; the fluttering of venetian blinds when the wind blows through them; the vibration of the stays of cable-stayed bridges.

It was at first thought that vortex shedding had caused the collapse of the Tacoma Narrows Bridge, whereas the cause of the failure was an aeroelastic phenomenon known as flutter.

This type of phenomenon must be taken into account when designing structures such as submarine periscopes or industrial chimneys. One way to avoid it is to insert elements that disturb the flow. If the structure is cylindrical, fins longer than its diameter prevent the formation of vortex streets. Since, in the case of buildings or antennas, the wind can blow from any direction, helical strakes resembling screw threads are used. These are mounted at the top of the structure, generating an asymmetric three-dimensional flow that reduces the alternating formation of vortices.»
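The shedding frequency follows a simple relation via the Strouhal number. A hedged worked example (St ≈ 0.2 is a typical value for a bluff cylinder at the relevant Reynolds numbers; the building width and wind speed are illustrative assumptions):

\[
f \;=\; \frac{\mathrm{St}\, U}{D} \;\approx\; \frac{0.2 \times 30\ \text{m/s}}{40\ \text{m}} \;=\; 0.15\ \text{Hz},
\]

that is, a 40 m wide tower in a 30 m/s wind would be pushed sideways roughly once every 6.7 seconds; if that rhythm approaches the tower's natural frequency, resonance sets in, which is precisely what a twisted, helical plan is meant to prevent.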

Published in: Military Geopolitics, Military Affairs, Science & Technology

China. Its first supercomputer entirely made in China.

Giuseppe Sandro Mela.

2017-02-28.


China is growing at an unexpected and unthinkable speed even in cutting-edge research. This news shows what a fierce competitor it is, with absolutely no intention of putting flowers in the mouths of cannons. Quite the opposite.

While it is true that the New York Times writer concludes by saying that:

«The new supercomputer, like similar machines anywhere in the world, has a variety of uses, and does not by itself represent a direct military challenge. It can be used to model climate change situations, for instance, or to perform analysis of large data sets»,

it is equally true that the Chinese could not care less about the ‘climate’: they twirl it like a handlebar moustache, one end this way and the other that.

The Sunway TaihuLight is meant for military use, whatever lucubrations may be produced in the West.

*

«This week, China’s Sunway TaihuLight officially became the fastest supercomputer in the world. The previous champ? Also from China. What used to be an arms race for supercomputing primacy among technological nations has turned into a blowout.»

*

«The Sunway TaihuLight is indeed a monster: theoretical peak performance of 125 petaflops, 10,649,600 cores, and 1.31 petabytes of primary memory. That’s not just “big.” Former Indiana Pacers center Rik Smits is big. This is, like, mountain big. Jupiter big.»

*

«TaihuLight’s abilities are matched only by the ambition that drove its creation. Fifteen years ago, China claimed zero of the top 500 supercomputers in the world. Today, it not only has more than everyone else—including the United States—but its best machine boasts speeds five times faster than the best the US can muster. And, in a first, it achieves those speeds with purely China-made chips.»

 *

«Think of TaihuLight, then, not in terms of power but of significance. It’s loaded with it, not only for what it can do, but how it does it.» [Source]

* * *

«The earlier supercomputer, the Tianhe 2, was powered by Intel’s Xeon processors; after it came online, the United States banned further export of the chips to China, in hopes of limiting the Chinese push into supercomputing.»

*

«The speed of the Chinese technologists, compared to United States and European artificial intelligence developers, is noteworthy.»

*

There are two critical points.

– This supercomputer uses entirely Chinese technology. In other words, China is able to design and build all the necessary components, to assemble them, and to write an efficient operating system. It is the most powerful in the world. The embargo ordered by then-President Obama served only to accelerate the sinicization of these technologies.

– The Chinese have become world-class competitors in artificial intelligence.

*

It would be splitting hairs to debate whether the Americans or the Chinese are better.

The fact is that the Chinese have progressed far enough to produce weapons that use artificial intelligence.

Is China Really Building Missiles With Artificial Intelligence?

China To Use High Level of Artificial Intelligence For Missiles

AI cruise control: China wants high-level artificial intelligence for next-gen missiles

China eyes artificial intelligence for new cruise missiles

*

Use the Sunway TaihuLight for ‘climate’ research? Only a perverse and perverted mind could think such a thing.


The New York Times. 2017-02-05. China’s Intelligent Weaponry Gets Smarter

Robert O. Work, the veteran defense official retained as deputy secretary by President Trump, calls them his “A.I. dudes.” The breezy moniker belies their serious task: The dudes have been a kitchen cabinet of sorts, and have advised Mr. Work as he has sought to reshape warfare by bringing artificial intelligence to the battlefield.

Last spring, he asked, “O.K., you guys are the smartest guys in A.I., right?”

No, the dudes told him, “the smartest guys are at Facebook and Google,” Mr. Work recalled in an interview.

Now, increasingly, they’re also in China. The United States no longer has a strategic monopoly on the technology, which is widely seen as the key factor in the next generation of warfare.

The Pentagon’s plan to bring A.I. to the military is taking shape as Chinese researchers assert themselves in the nascent technology field. And that shift is reflected in surprising commercial advances in artificial intelligence among Chinese companies.

Last year, for example, Microsoft researchers proclaimed that the company had created software capable of matching human skills in understanding speech.

Although they boasted that they had outperformed their United States competitors, a well-known A.I. researcher who leads a Silicon Valley laboratory for the Chinese web services company Baidu gently taunted Microsoft, noting that Baidu had achieved similar accuracy with the Chinese language two years earlier.

That, in a nutshell, is the challenge the United States faces as it embarks on a new military strategy founded on the assumption of its continued superiority in technologies such as robotics and artificial intelligence.

First announced last year by Ashton B. Carter, President Barack Obama’s defense secretary, the “Third Offset” strategy provides a formula for maintaining a military advantage in the face of a renewed rivalry with China and Russia.

Well into the 1960s, the United States held a military advantage based on technological leadership in nuclear weapons. In the 1970s, that perceived lead shifted to smart weapons, based on brand-new Silicon Valley technologies like computer chips. Now, the nation’s leaders plan on retaining that military advantage with a significant commitment to artificial intelligence and robotic weapons.

But the global technology balance of power is shifting. From the 1950s through the 1980s, the United States carefully guarded its advantage. It led the world in computer and material science technology, and it jealously hoarded its leadership with military secrecy and export controls.

In the late 1980s, the emergence of the inexpensive and universally available microchip upended the Pentagon’s ability to control technological progress. Now, rather than trickling down from military and advanced corporate laboratories, today’s new technologies increasingly come from consumer electronics firms. Put simply, the companies that make the fastest computers are the same ones that put things under our Christmas trees.

As consumer electronics manufacturing has moved to Asia, both Chinese companies and the nation’s government laboratories are making major investments in artificial intelligence.

The advance of the Chinese was underscored last month when Qi Lu, a veteran Microsoft artificial intelligence specialist, left the company to become chief operating officer at Baidu, where he will oversee the company’s ambitious plan to become a global leader in A.I.

And last year, Tencent, developer of the mobile app WeChat, a Facebook competitor, created an artificial intelligence research laboratory and began investing in United States-based A.I. companies.

Rapid Chinese progress has touched off a debate in the United States between military strategists and technologists over whether the Chinese are merely imitating advances or are engaged in independent innovation that will soon overtake the United States in the field.

“The Chinese leadership is increasingly thinking about how to ensure they are competitive in the next wave of technologies,” said Adam Segal, a specialist in emerging technologies and national security at the Council on Foreign Relations.

In August, the state-run China Daily reported that the country had embarked on the development of a cruise missile system with a “high level” of artificial intelligence. The new system appears to be a response to a missile the United States Navy is expected to deploy in 2018 to counter growing Chinese military influence in the Pacific.

Known as the Long Range Anti-Ship Missile, or L.R.A.S.M., it is described as a “semiautonomous” weapon. According to the Pentagon, this means that though targets are chosen by human soldiers, the missile uses artificial intelligence technology to avoid defenses and make final targeting decisions.

The new Chinese weapon typifies a strategy known as “remote warfare,” said John Arquilla, a military strategist at the Naval Postgraduate School in Monterey, Calif. The idea is to build large fleets of small ships that deploy missiles, to attack an enemy with larger ships, like aircraft carriers.

“They are making their machines more creative,” he said. “A little bit of automation gives the machines a tremendous boost.”

Whether or not the Chinese will quickly catch the United States in artificial intelligence and robotics technologies is a matter of intense discussion and disagreement in the United States.

Andrew Ng, chief scientist at Baidu, said the United States may be too myopic and self-confident to understand the speed of the Chinese competition.

“There are many occasions of something being simultaneously invented in China and elsewhere, or being invented first in China and then later making it overseas,” he said. “But then U.S. media reports only on the U.S. version. This leads to a misperception of those ideas having been first invented in the U.S.”

A key example of Chinese progress that goes largely unreported in the United States is Iflytek, an artificial intelligence company that has focused on speech recognition and understanding natural language. The company has won international competitions both in speech synthesis and in translation between Chinese- and English-language texts.

The company, which Chinese technologists said has a close relationship with the government for development of surveillance technology, said it is working with the Ministry of Science and Technology on a “Humanoid Answering Robot.”

“Our goal is to send the machine to attend the college entrance examination, and to be admitted by key national universities in the near future,” said Qingfeng Liu, Iflytek’s chief executive.

The speed of the Chinese technologists, compared to United States and European artificial intelligence developers, is noteworthy. Last April, Gansha Wu, then the director of Intel’s laboratory in China, left his post and began assembling a team of researchers from Intel and Google to build a self-driving car company. Last month, the company, Uisee Technology, met its goal — taking a demonstration to the International Consumer Electronics Show in Las Vegas — after just nine months of work.

“The A.I. technologies, including machine vision, sensor fusion, planning and control, on our car are completely home-brewed,” Mr. Wu said. “We wrote every line by ourselves.”

Their first vehicle is intended for controlled environments like college and corporate campuses, with the ultimate goal of designing a shared fleet of autonomous taxis.

The United States’ view of China’s advance may be starting to change. Last October, a White House report on artificial intelligence included several footnotes suggesting that China is now publishing more research than scholars here.

Still, some scientists say the quantity of academic papers does not tell us much about innovation. And there are indications that China has only recently begun to make A.I. a priority in its military systems.

“I think while China is definitely making progress in A.I. systems, it is nowhere close to matching the U.S.,” said Abhijit Singh, a former Indian military officer who is now a naval weapons analyst at the Observer Research Foundation in New Delhi.

Chinese researchers who are directly involved in artificial intelligence work in China have a very different view.

“It is indisputable that Chinese authors are a significant force in A.I., and their position has been increasing drastically in the past five years,” said Kai-Fu Lee, a Taiwanese-born artificial intelligence researcher who played a key role in establishing both Microsoft’s and Google’s China-based research laboratories.

Mr. Lee, now a venture capitalist who invests in both China and the United States, acknowledged that the United States is still the global leader but believes that the gap has drastically narrowed. His firm, Sinovation Ventures, has recently raised $675 million to invest in A.I. both in the United States and in China.

“Using a chess analogy,” he said, “we might say that grandmasters are still largely North American, but Chinese occupy increasingly greater portions of the master-level A.I. scientists.”

What is not in dispute is that the close ties between Silicon Valley and China, both in terms of investment and research, and the open nature of much of the American A.I. research community, have made the most advanced technology easily available to China.

In addition to setting up research outposts such as Baidu’s Silicon Valley A.I. Laboratory, Chinese citizens, including government employees, routinely audit Stanford University artificial intelligence courses.

One Stanford professor, Richard Socher, said it was easy to spot the Chinese nationals because after the first few weeks, his students would often skip class, choosing instead to view videos of the lectures. The Chinese auditors, on the other hand, would continue to attend, taking their seats at the front of the classroom.

Artificial intelligence is only one part of the tech frontier where China is advancing rapidly.

Last year, China also brought the world’s fastest supercomputer, the Sunway TaihuLight, online, supplanting another Chinese model that had been the world’s fastest. The new supercomputer is thought to be part of a broader Chinese push to begin driving innovation, a shift from its role as a manufacturing hub for components and devices designed in the United States and elsewhere.

In a reflection of the desire to become a center of innovation, the processors in the new computer are of a native Chinese design. The earlier supercomputer, the Tianhe 2, was powered by Intel’s Xeon processors; after it came online, the United States banned further export of the chips to China, in hopes of limiting the Chinese push into supercomputing.

The new supercomputer, like similar machines anywhere in the world, has a variety of uses, and does not by itself represent a direct military challenge. It can be used to model climate change situations, for instance, or to perform analysis of large data sets.

But similar advances in high-performance computing being made by the Chinese could be used to push ahead with machine-learning research, which would have military applications, along with more typical defense functions, such as simulating nuclear weapons tests or breaking the encryption used by adversaries.

Moreover, while there appear to be relatively cozy relationships between the Chinese government and commercial technology efforts, the same cannot be said about the United States. The Pentagon recently restarted its beachhead in Silicon Valley, known as the Defense Innovation Unit Experimental facility, or DIUx. It is an attempt to rethink bureaucratic United States government contracting practices in terms of the faster and more fluid style of Silicon Valley.

The government has not yet undone the damage to its relationship with the Valley brought about by Edward J. Snowden’s revelations about the National Security Agency’s surveillance practices. Many Silicon Valley firms remain hesitant to be seen as working too closely with the Pentagon out of fear of losing access to China’s market.

“There are smaller companies, the companies who sort of decided that they’re going to be in the defense business, like a Palantir,” said Peter W. Singer, an expert in the future of war at New America, a think tank in Washington, referring to the Palo Alto, Calif., start-up founded in part by the venture capitalist Peter Thiel. “But if you’re thinking about the big, iconic tech companies, they can’t become defense contractors and still expect to get access to the Chinese market.”

Those concerns are real for Silicon Valley.

“No one sort of overtly says that, because the Pentagon can’t say it’s about China, and the tech companies can’t,” Mr. Singer said. “But it’s there in the background.”

Published in: Medicine and Biology, Psychiatry, Science & Technology

Vitamin supplements may be an efficient cause of cancer.

Giuseppe Sandro Mela.

2016-12-13.

[Image: The medieval dance of death.]

Superstition and arrogance are traits that go almost invariably hand in hand with ignorance.

Thus preconceived ideas take shape, so deep-rooted and coercive that they end only with the death of the obsessed: they cannot be uprooted even by cannon fire.

The obsessed are impervious to every possible form of reasoning.

One of the most deeply rooted myths of contemporary society is that of vitamins and antioxidants.

Taking them continuously is held to be supremely beneficial, and the FAO indeed hands them out by the fistful, in place of wheat and rice. Die of hunger, yes, but die healthy.

*

Virtually no one, apart from specialists, knows what they are or how they work, yet everyone chatters that taking them every day in large doses would be a panacea against everything, death included.

But Sister Death cares little for their beliefs, and mows down these chronic consumers with renewed vigor. And these hyper-consumers are statistically far more exposed to cancerous diseases. Then, as usual, a mass of bleating goats harshly contests the plain fact and fights the evidence. The ridiculous part is that they declare themselves faithful servants of science.

While such behavior is understandable in those who profit from it, absent a psychiatric motive the behavior of everyone else would remain incomprehensible.

Let us be clear: the death of the obsessed is a liberation.

*

«The incidence of lung cancer increased by 16% in the group given vitamin supplements.»

*

«In 1994, for example, one trial followed the lives of 29,133 Finnish people in their 50s. All smoked, but only some were given beta-carotene supplements. Within this group, the incidence of lung cancer increased by 16%.»

*

«A similar result was found in postmenopausal women in the U.S. After 10 years of taking folic acid (a variety of B vitamin) every day their risk of breast cancer increased by 20% relative to those women who didn’t take the supplement.»

*

«It gets worse. One study of more than 1,000 heavy smokers published in 1996 had to be terminated nearly two years early. After just four years of beta-carotene and vitamin A supplementation, there was a 28% increase in lung cancer rates and a 17% increase in those who died.»

*

«A study published in 2007 from the US National Cancer Institute, for instance, found that men that took multivitamins were twice as likely to die from prostate cancer compared to those who didn’t.»

*

«And in 2011, a similar study on 35,533 healthy men found that vitamin E and selenium supplementation increased prostate cancer by 17%»

* * * * * * *

«When the facts contradict the theory, so much the worse for the facts» [Hegel]


BBC. 2016-12-12. Why vitamin pills don’t work, and may be bad for you.

We dose up on antioxidants as if they are the elixir of life. At best, they are probably ineffective. At worse, they may just send you to an early grave.

*

For Linus Pauling, it all started to go wrong when he changed his breakfast routine. In 1964, at the age of 65, he started adding vitamin C to his orange juice in the morning. It was like adding sugar to Coca Cola, and he believed – wholeheartedly, sometimes vehemently  – that it was a good thing.

Before this, his breakfasts were nothing to write about. Just that they happened early every morning before going to work at California Institute of Technology, even on weekends. He was indefatigable, and his work was fruitful.

At the age of 30, for instance, he proposed a third fundamental way that atoms are held together in molecules, melding ideas from both chemistry and quantum mechanics. Twenty years later, his work into how proteins (the building blocks of all life) are structured helped Francis Crick and James Watson decode the structure of DNA (the code of said building blocks) in 1953. 

The next year, Pauling was awarded a Nobel Prize in Chemistry for his insights into how molecules are held together. As Nick Lane, a biochemist from University College London, writes in his 2001 book Oxygen, “Pauling… was a colossus of 20th Century science, whose work laid the foundations of modern chemistry.”

But then came the vitamin C days. In his 1970 bestselling book, How To Live Longer and Feel Better, Pauling argued that such supplementation could cure the common cold. He consumed 18,000 milligrams (18 grams) of the stuff per day, 50 times the recommended daily allowance.

In the book’s second edition, he added flu to the list of easy fixes. When HIV spread in the US during the 1980s, he claimed that vitamin C could cure that, too.

In 1992, his ideas were featured on the cover of Time Magazine under the headline: “The Real Power of Vitamins”. They were touted as treatments for cardiovascular diseases, cataracts, and even cancer. “Even more provocative are glimmerings that vitamins can stave off the normal ravages of ageing,” the article claimed.

Sales in multivitamins and other dietary supplements boomed, as did Pauling’s fame.

But his academic reputation went the other way. Over the years, vitamin C, and many other dietary supplements, have found little backing from scientific study. In fact, with every spoonful of supplement he added to his orange juice, Pauling was more likely harming rather than helping his body. His ideas have not just proven to be wrong, but ultimately dangerous. 

Pauling was basing his theories on the fact that vitamin C is an antioxidant, a breed of molecules that includes vitamin E, beta-carotene, and folic acid. Their benefits are thought to arise from the fact that they neutralise highly reactive molecules called free-radicals.

In 1954, Rebeca Gerschman, then at the University of Rochester, New York, first identified these molecules as a possible danger – ideas expanded upon by Denham Harman, from the Donner Laboratory of Medical Physics at UC Berkeley in 1956, who argued that free radicals can lead to cellular deterioration, disease and, ultimately, ageing.

Throughout the 20th Century, scientists steadily built on his ideas and they soon became widely accepted.

Here’s how it works. The process starts with mitochondria, those tiny combustion engines that sit within our cells. Inside their internal membranes food and oxygen are converted into water, carbon dioxide, and energy. This is respiration, a mechanism that fuels all complex life.

‘Leaky watermills’

But it isn’t so simple. In addition to food and oxygen, a continuous flow of negatively charged particles called electrons is also required. Like a subcellular stream downhill powering a series of watermills, this flow is maintained across  four proteins, each embedded in the internal membrane of the mitochondria, powering the production of the end product: energy.

This reaction fuels everything we do, but it is an imperfect process. There is some leakage of electrons from three of the cellular watermills, each able to react with oxygen molecules nearby. The result is a free radical, a radically reactive molecule with a free electron. 

In order to regain stability, free radicals wreak havoc on the structures around them, ripping electrons from vital molecules such as DNA and proteins in order to balance their own charge. Although inconceivably small in scale, the production of free radicals, Harman and many others posited, would gradually take its toll on our entire bodies, causing mutations that can lead to ageing and age-related diseases such as cancer.

In short, oxygen is the breath of life, but it also holds the potential to make us old, decrepit, and then dead.

Shortly after free radicals were linked to ageing and disease, they were seen as enemies that should be purged from our bodies. In 1972, for example, Harman wrote, “Decreasing [free radicals] in an organism might be expected to result in a decreased rate of biological degradation with an accompanying increase in the years of useful, healthy life. It is hoped that [this theory] will lead to fruitful experiments directed toward increasing the healthy human lifespan.” 

He was talking about antioxidants, molecules that accept electrons from free radicals thereby diffusing the threat. And the experiments he hoped for were sown, nurtured, and replicated over the next few decades. But they bore little fruit.

In the 1970s and into the 80s, for example, many mice – our go-to laboratory animal – were prescribed a variety of supplementary antioxidants in their diet or via an injection straight into the bloodstream. Some were even genetically modified so that the genes coding for certain antioxidants were more active than non-modified lab mice. 

Although different in method, the results were largely the same: an excess of antioxidants didn’t quell the ravages of ageing, nor stop the onset of disease.

“They never really proved that they were extending lifespan, or improving it,” says Antonio Enriquez from the Spanish National Centre for Cardiovascular Research in Madrid. “Mice don’t care for [supplements] very much.”

What about humans? Unlike our smaller mammalian kin, scientists can’t take members of society into labs and monitor their health over their lifetime, while controlling for any extraneous factors that could bias the results at the end. But what they can do is set up long-term clinical trials.

The premise is pretty simple. First, find a group of people similar in age, location, and lifestyle. Second, split them into two subgroups. One half receives the supplement you’re interested in testing, while the other receives a blank – a sugar pill, a placebo. Third, and crucially to avoid unintentional bias, no one knows who was given which until after the trial; not even those administering the treatment. 

Known as a double-blind control trial, this is the gold standard of pharmaceutical research. Since the 1970s, there have been many trials like this trying to figure out what antioxidant supplementation does for our health and survival. The results are far from heartening.
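To make the blinding step concrete: a minimal sketch of randomized, coded assignment (entirely illustrative; real trials use dedicated randomization services and sealed code lists):

```python
# Sketch of double-blind assignment: participants receive neutral kit codes,
# and the key linking codes to "supplement" vs "placebo" stays sealed until
# the trial ends. Entirely illustrative.
import random

participants = [f"P{i:04d}" for i in range(1, 21)]   # illustrative cohort of 20
random.shuffle(participants)
half = len(participants) // 2

# The sealed key: neither participants nor administering staff may see this.
sealed_key = {pid: ("supplement" if i < half else "placebo")
              for i, pid in enumerate(participants)}

# What staff and participants see during the trial: only anonymous kit codes,
# with identical-looking pills inside every kit.
codes = random.sample(range(10000, 100000), len(participants))
kit_codes = dict(zip(participants, codes))

# Only after all outcomes are recorded is the key opened and the two arms compared.
print(sum(arm == "supplement" for arm in sealed_key.values()), "on supplement")
```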

In 1994, for example, one trial followed the lives of 29,133 Finnish people in their 50s. All smoked, but only some were given beta-carotene supplements. Within this group, the incidence of lung cancer increased by 16%.

A similar result was found in postmenopausal women in the U.S. After 10 years of taking folic acid (a variety of B vitamin) every day their risk of breast cancer increased by 20% relative to those women who didn’t take the supplement. 

It gets worse. One study of more than 1,000 heavy smokers published in 1996 had to be terminated nearly two years early. After just four years of beta-carotene and vitamin A supplementation, there was a 28% increase in lung cancer rates and a 17% increase in those who died.

These aren’t trivial numbers. Compared to placebo, 20 more people were dying every year when taking these two supplements. Over the four years of the trial, that equates to 80 more deaths. As the authors wrote at the time, “The present findings provide ample grounds to discourage use of supplemental beta-carotene and the combination of beta-carotene and vitamin A.”

Fatal ideas

Of course, these notable studies don’t tell the full story. There are some studies that do show benefits of taking antioxidants, especially when the population sampled doesn’t have access to a healthy diet. 

But, according a review from 2012 that noted the conclusions of 27 clinical trials assessing the efficacy of a variety of antioxidants, the weight of evidence does not fall in its favour.

Just seven studies reported that supplementation led to some sort of health benefit from antioxidant supplements, including reduced risk of coronary heart disease and pancreatic cancer. Ten studies didn’t see any benefit at all – it was as if all patients were given the sugar pill also (but, of course, they weren’t). That left another 10 studies that found many patients to be in a measurably worse state after being administered antioxidants than before, including an increased incidence of diseases such as lung and breast cancer.

“The idea that antioxidant [supplementation] is a miracle cure is completely redundant,” says Enriquez. Linus Pauling was largely unaware of the fact that his own ideas could be fatal. In 1994, before the publication of many of the large-scale clinical trials, he died of prostate cancer. Vitamin C certainly wasn’t the cure-all that he cantankerously claimed it was up until his last breath. But did it contribute to a heightened risk? 

We’ll never know for sure. But given that multiple studies have linked excess antioxidants to cancer, it certainly isn’t out of the question. A study published in 2007 from the US National Cancer Institute, for instance, found that men that took multivitamins were twice as likely to die from prostate cancer compared to those who didn’t. And in 2011, a similar study on 35,533 healthy men found that vitamin E and selenium supplementation increased prostate cancer by 17%.

Ever since Harman proposed his great theory of free radicals and ageing, the neat separation of antioxidants and free radicals (oxidants) has been deteriorating. It has aged.

Antioxidant is only a name, not a fixed definition of nature. Take vitamin C, Pauling’s preferred supplement. At the correct dose, vitamin C neutralises highly charged free radicals by accepting their free electron. It’s a molecular martyr, taking the hit upon itself to protect the cellular neighbourhood. 

But by accepting an electron, the vitamin C becomes a free radical itself, able to damage cell membranes, proteins and DNA. As the food chemist William Porter wrote in 1993, “[vitamin C] is truly a two-headed Janus, a Dr Jekyll-Mr Hyde, an oxymoron of antioxidants.”

Thankfully, in normal circumstances, the enzyme vitamin C reductase can return vitamin C’s antioxidant persona. But what if there’s so much vitamin C that it simply can’t keep up with supply? Although such simplifying of complex biochemistry is in itself problematic, the clinical trials above provide some possible outcomes.  

Divide and conquer

Antioxidants have a dark side. And, with increasing evidence that free radicals themselves are essential for our health, even their good side isn’t always helpful.

We now know that free radicals are often used as molecular messengers that send signals from one region of the cell to another. In this role, they have been shown to modulate when a cell grows, when it divides in two, and when it dies. At every stage of a cell’s life, free radicals are vital.

Without them, cells would continue to grow and divide uncontrollably. There’s a word for this: cancer.

We would also be more prone to infections from outside. When under stress from an unwanted bacterium or virus, free radicals are naturally produced in higher numbers, acting as silent klaxons to our immune system. In response, those cells at the vanguard of our immune defense – macrophages and lymphocytes – start to divide and scout out the problem. If it is a bacterium, they will engulf it like Pac-Man eating a blue ghost.

It is trapped, but it is not yet dead. To change that, free radicals are once again called into action. Inside the immune cell, they are used for what they are infamous for: to damage and to kill. The intruder is torn apart.

From start to finish, a healthy immune response depends on free radicals being there for us, within us. As geneticists Joao Pedro Magalhaes and George Church wrote in 2006: “In the same way that fire is dangerous and nonetheless humans learned how to use it, it now appears that cells evolved mechanisms to control and use [free radicals].”

Put another way, freeing ourselves of free radicals with antioxidants is not a good idea. “You would leave the body helpless against some infections,” says Enriquez.

Thankfully, your body has systems in place to keep your inner biochemistry as stable as possible. For antioxidants, this generally involves filtering any excess out of the bloodstream into urine for disposal. “They go in the toilet,” says Cleva Villanueva from Instituto Politécnico Nacional, Mexico City, in an email.

“We’re very good at balancing things out so that the effect [of supplementation] is moderate whatever you do, which we should be grateful for,” says Lane. Our bodies have been selected to balance the risk of oxygen ever since the first microbes started to breathe this toxic gas. We can’t change billions of years of evolution with a simple pill.

No one would deny that vitamin C is vital to a healthy lifestyle, as are all antioxidants, but unless you are following doctor’s orders, these supplements are rarely going to be the answer for a longer life when a healthy diet is also an option. “Administration of antioxidants is justified only when it is evident that there is a real deficiency of a specific antioxidant,” says Villanueva. “The best option is to get antioxidants from food because it contains a mixture of antioxidants that work together.”

“Diets rich in fruits and vegetables have been shown generally to be good for you,” says Lane. “Not invariably, but generally that’s agreed to be the case.” Although often attributed to antioxidants, the benefits of such a diet, he says, might also hail from a healthy balance of pro-oxidants and other compounds whose roles aren’t yet fully understood.

After decades of unlocking the baroque biochemistry of free radicals and antioxidants, hundreds of thousands of volunteers, and millions of pounds spent on clinical trials, the best conclusion that 21st Century science has to offer is also found within a child’s classroom – eat your five-a-day.

Published in: Science & Technology, Economic Systems

Peter Higgs. The anomaly that casts doubt on 300 billion a year in investment.

Giuseppe Sandro Mela.

2016-12-08.


It is estimated that roughly 300 billion euros are spent on research worldwide each year. No small sum.

These investments have certainly produced a very large number of technological advances of varying importance.

Publication volumes have swollen beyond all human measure: roughly 500,000 scientific papers per year in the biomedical field alone.

At this point it is simply impossible for a researcher to read all the scientific articles published even in his own very narrow field.

And that is not all.

Si dovrebbe prendere atto che oltre il 90% dei lavori pubblicati non riceve nemmeno una citazione nel prosieguo. In parole poverissime: sono costi inutili.

In termini medi, 250 miliardi cacciati ai pesci. Ci si dovrebbe chiedere il perché.

Assistiamo da tempo alla parametrizzazione dei lavori scientifici, dall’Impact Facor in avanti. Il “valore” scientifico di un ricercatore è dato da sofisticati algoritmi. I fondi di ricerca sono anche essi erogati sulla base di algoritmi di calcolo. Anche il valore scientifico delle pubblicazioni è così espresso da un parametro algoritmico.
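To make the point concrete, consider one of the simplest of these algorithmic parameters, the h-index. The sketch below (illustrative citation counts only, nothing drawn from the post itself) shows how mechanically such a “scientific value” is computed:

```python
# Minimal sketch of the h-index: the largest h such that the researcher
# has at least h papers each cited at least h times.
def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # still at least `rank` papers with >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3: one landmark paper counts for little
```

Note how the index rewards steady volume over a single landmark result, which is precisely the distortion discussed here.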

What has happened, and what is happening?

Simple: when a method becomes an end in itself, it loses all value.

One publishes to satisfy the method, not to obtain a scientific result.

Researchers understood very quickly what they had to do if they wanted to keep receiving funding, and they did it. Numerically high output, all of it “politically correct”: the only way into international journals and, consequently, into chairs and grants. One therefore publishes to obtain the highest possible score as determined by the algorithms. This phenomenon has taken on staggering proportions in the biomedical fields and in the humanities.

The result is utterly predictable. Scientists of the calibre of Nobel laureates Einstein, Planck, Curie, de Broglie, Heisenberg and so on would today fail to obtain even a fellowship, never mind grants or a chair.

This observation should make plain how grotesque the situation has become. Now then: were it merely a guild matter, it would be the researchers’ own business. Unfortunately, all those billions, an estimate well below the sector’s total spending, are money taken from the Taxpayer. In other words, the matter concerns us directly.


A historical review brings out two major points.

First, the scientific papers that constitute a true milestone, a scientific revolution, like Einstein’s, can be counted on the fingers of one hand. The qualitative yield is minuscule. The overwhelming majority of publications never receive a single citation: in rather politically incorrect words, they are essentially pulp. Insiders say they provide cultural context.

Second, even in the technological sector very few published works have introduced new technologies capable of deeply affecting people’s habits and customs. Let us be clearer. The introduction of electric power generation, the telephone, the automobile, the radio, the washing machine, the ballpoint pen, the computer and finally the mobile phone each marked a watershed: before and after the availability of those technologies. Here too, the qualitative yield relative to the number of scientific publications is truly meagre.

*

In recent decades the custom has taken hold of granting funds only to researchers with a reasonable number of publications in international journals, each journal being assigned an empirically determined weight. The greater the sum of these weights, the better the researcher is supposed to be. Then there are many other parameters.
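A minimal sketch of that criterion, with purely hypothetical journal weights (no real ranking is implied), shows how a Higgs-like record loses to high-volume routine output:

```python
# Hypothetical journal weights -- illustrative values, not a real ranking.
JOURNAL_WEIGHT = {
    "Nature": 40.0,
    "Phys. Rev. Lett.": 8.0,
    "J. Chem. Phys.": 3.0,
}

def researcher_score(publications: list[str]) -> float:
    # Sum of the weights of the journals the candidate published in;
    # unknown journals get a default weight of 1.
    return sum(JOURNAL_WEIGHT.get(journal, 1.0) for journal in publications)

few_landmark = ["Phys. Rev. Lett.", "Nature"]   # a Higgs-like record
many_routine = ["J. Chem. Phys."] * 20          # high-volume routine output
print(researcher_score(few_landmark))           # 48.0
print(researcher_score(many_routine))           # 60.0 -- volume wins
```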

The same holds for career advancement and the corresponding salaries.

The result has been a race to publish: it has become a matter of life and death.

*

Let us now ask: does this system really allow the best, the potentially significant research, to emerge? Secondarily, is it the best system for allocating the more than 300 billion a year?

Consider the strange case of Prof. Peter Higgs, Nobel laureate in Physics for theorizing the boson that bears his name. Scientific work of the first order.

These are Peter Higgs’s publications.

  • “Theoretical Determination of Electron Density in Organic Molecules” (with C A Coulson, S L Altmann and N H March) Nature 168 1039 (1951)

  • “Perturbation Method for the Calculation of Molecular Vibration Frequencies I” J. Chem. Phys. 21 1131 (1953)

  • “A Method for Computing Zero-Point Energies” J. Chem. Phys. 21 1330 (1953)

  • “Vibration Spectra of Helical Molecules” Proc. Roy. Soc. A 220 472 (1953)

  • “Vibrational Modifications of the Electron Density in Molecular Crystals I” Acta Cryst. 6 232 (1953)

  • “Perturbation Method for the Calculation of Molecular Vibration Frequencies II” J. Chem. Phys. 23 1448 (1955)

  • “Perturbation Method for the Calculation of Molecular Vibration Frequencies III” J. Chem. Phys. 23 1450 (1955)

  • “Vibrational Modifications of the Electron Density in Molecular Crystals II” Acta Cryst. 8 99 (1955)

  • “A Method for Calculating Thermal Vibration Amplitudes from Spectroscopic Data” Acta Cryst. 8 619 (1955)

  • “Vacuum Expectation Values as Sums over Histories” Nuovo Cimento (10) 4 1262 (1956)

  • “On Four-Dimensional Isobaric Spin Formalisms” Nuclear Physics 4 1262 (1957)

  • “Integration of Secondary Constraints in Quantized General Relativity” Phys. Rev. Lett. 1 373 (1958)

  • “Integration of Secondary Constraints in Quantized General Relativity” Phys. Rev. Lett. 3 66 (1959)

  • “Quadratic Lagrangians and General Relativity” Nuovo Cimento (10) 11 816 (1959)

  • “Broken Symmetries, Massless Particles and Gauge Fields” Physics Letters 12 132 (1964)

  • “Broken Symmetries and the Masses of Gauge Bosons” Phys. Rev. Lett. 13 508 (1964)

  • “Spontaneous Symmetry Breakdown without Massless Bosons” Phys. Rev. 145 1156 (1966)

  • “Spontaneous Symmetry Breaking” two lectures at the 14th Scottish Universities Summer School in Physics (1973). Published in “Phenomenology of Particles at High Energy” R L Crawford, R Jennings (eds.) Academic Press (1974) ISBN 9780121971502

  • “Dynamical Symmetries in a Spherical Geometry I” J. Phys. A 12 309 (1979)

  • “Prehistory of the Higgs Boson” Comptes Rendus Physique 8 970-972 (2007)

  • “Evading the Goldstone Theorem” Rev. Mod. Phys. 86 851 (2014)

  • “Evading the Goldstone Theorem” Annalen der Physik 526 211 (2014)


All told, he published so meagre a number of papers that today no one would give him a second look.

He would not receive a cent in grants.

He would not even pass the competition for the last post in the last physics faculty in the world. Not even as an apprentice fellow.

Too few papers, with a summed impact score at the level of a fresh graduate or doctoral student.

And yet he won the Nobel Prize.

*

So let us repeat the question posed above: does this system really allow the best, the potentially significant research, to emerge?

 

Published in: Devolution of Socialism, Science & Technology, Economic Systems

Switzerland. Another slap across the Greens’ snout. The nuclear referendum.

Giuseppe Sandro Mela.

2016-11-27.

 pagurus_bernhardus

Even an anomuran of the Malacostraca would have figured it out long ago.

But the Greens simply do not get there. It is an intrinsic trait of theirs.

Every so often they propose a referendum to try to kill off what remains of nuclear power, reckoning that by sheer repetition one of them might eventually pass.

A happy outcome for them, since they could then stop by the cashier to clip their coupons.

*

But times have changed. Delightfully changed.

Their era is over and will never return.

Their votes lie outside the constitutional arc.

 


La Stampa. 2016-11-27. Switzerland says “no” to the gradual abandonment of nuclear energy

The referendum promoted by the Greens was rejected by a majority of the cantons.

*

The popular initiative for a planned phase-out of nuclear energy has been defeated: put to a referendum vote in Switzerland today, the text promoted by the Greens was rejected by a majority of the cantons. To pass, popular initiatives require a double majority: of the voters and of the Swiss cantons.

The Swiss Greens’ initiative sought to ban the construction of new plants and to cap the operating life of existing ones at 45 years. The government, although itself in favour of a phase-out, campaigned for a No vote, holding that the initiative would lead to an overly hasty shutdown of the five nuclear plants operating in Switzerland.


Ansa. 2016-11-27. Switzerland: nuclear shutdown rejected

GENEVA, 27 NOV – The popular initiative for a planned phase-out of nuclear energy has been defeated: put to a referendum vote in Switzerland today, the text promoted by the Greens was rejected by a majority of the cantons. To pass, popular initiatives require a double majority: of the voters and of the Swiss cantons.

Published in: Science & Technology

Science. Use and abuse. The case of PubPeer.

Giuseppe Sandro Mela.

2016-11-11.

 2016-11-08__la-scienza-uso-ed-abuso-__001

The adjective “scientific” states that a phenomenon is repeatable independently of the observer.

Thus it is scientific to state that a human hand has five fingers: anyone can verify this fact at any time.

Establishing a fact depends substantially on the methodology employed. Phenomena assessed with different methodologies can yield results that differ in kind or in magnitude. The enzymatic and the chemical methods give different results when measuring blood glucose: but if the tests have been properly conducted, the differences are constant and predictable.
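A minimal sketch of what “constant and predictable” means in practice, using invented paired glucose readings (the numbers are illustrative, not from any real assay):

```python
# Paired glucose readings (mg/dL) from the same samples, measured with two
# methods -- invented numbers for illustration only.
enzymatic = [82.0, 95.0, 110.0, 143.0, 178.0]
chemical  = [87.1, 100.2, 114.9, 148.0, 183.1]   # reads ~5 mg/dL higher

# If the bias is constant, the mean difference characterizes it fully.
n = len(enzymatic)
mean_bias = sum(c - e for c, e in zip(chemical, enzymatic)) / n
print(f"mean bias: {mean_bias:.1f} mg/dL")       # ~5.1 mg/dL

def to_enzymatic(chemical_reading: float) -> float:
    # Convert a chemical-method reading onto the enzymatic scale.
    return chemical_reading - mean_bias

print(f"{to_enzymatic(120.0):.1f} mg/dL")        # ~114.9 mg/dL
```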

Nor should one think this holds only for the empirical sciences: it holds for the abstract sciences too. From identical postulates and definitions, any mathematician who correctly follows the criteria of proof arrives at the same result. Moreover, if the proofs are correct, the same theorem can be proved by different methods: geometrically or trigonometrically, for instance, to stay within elementary mathematics.

*

What should be meant by a “correct” observation, argument, or proof?

Correct is whatever is conducted according to the principle of non-contradiction.

That principle declares the falsity of any proposition implying that some proposition A and its negation, the proposition not-A, are both true at the same time and in the same respect. A proposition is either true or false.

Alterum non datur, from which ex falso quodlibet follows.

There are rare but real situations in which neither a statement nor its negation can be classed as completely true: but the impossibility of categorizing true and false does not invalidate the principle of non-contradiction. The same reasoning can be applied to Łukasiewicz’s logics or to Gödel’s many-valued logics.
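For the curious, a minimal sketch of Łukasiewicz’s three-valued logic (a standard textbook construction, summarized here rather than taken from the post) shows how an intermediate truth value behaves without abolishing the principle:

```python
# Łukasiewicz three-valued logic: truth values 0 (false), 1/2 (indeterminate),
# 1 (true). Negation is 1 - a; conjunction is min(a, b).
from fractions import Fraction

VALUES = [Fraction(0), Fraction(1, 2), Fraction(1)]

def neg(a: Fraction) -> Fraction:
    return 1 - a

def conj(a: Fraction, b: Fraction) -> Fraction:
    return min(a, b)

for a in VALUES:
    # Classically, "a AND not-a" is always false (0). With the middle value
    # it reaches 1/2 -- indeterminate -- but never becomes fully true (1).
    print(f"a = {a}: a AND not-a = {conj(a, neg(a))}")
```

The contradiction never evaluates to true; the principle survives, merely softened at the indeterminate value.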

*

Let us broaden the discussion.

Mathematics is a grandiose construction, almost a temple to the principle of non-contradiction: every proof rests on that principle.

Some observations are nothing short of astonishing.

The first is that nearly all natural phenomena can be expressed and modelled in mathematical terms. To stay at the elementary level, think of the laws of dynamics or thermodynamics, or of the law of falling bodies. But if nature can be described in mathematical terms, it follows that nature too is implemented according to the principle of non-contradiction.
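As an elementary instance, the law of falling bodies can be written and evaluated in a single line (a standard textbook formula; the worked number is ours, not from the post):

```latex
% Distance fallen from rest after time t, with g \approx 9.81\ \mathrm{m/s^2}:
s(t) = \tfrac{1}{2}\, g\, t^{2},
\qquad
s(3\,\mathrm{s}) \approx \tfrac{1}{2}\cdot 9.81 \cdot 3^{2} \approx 44.1\,\mathrm{m}.
```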

The second is more astonishing still. Mathematicians have often studied wholly abstract mathematics out of sheer curiosity and intellectual satisfaction. Well, almost invariably, even after long stretches of time, it turned out that this mathematics could properly describe new categories of empirical phenomena.

Galois’s theories slumbered for more than a century before Heisenberg used them, reinventing them. In this regard, to better grasp the point, it may be worth rereading the paper by Banaszak et al., Galois Symmetries of Bethe Parameters for the Heisenberg Pentagon, Reports on Mathematical Physics, 71: 205-215, 2013.

* * * * * * *

Like it or not, the sciences progress by learning from their own errors. If this happens in the abstract sciences, it is found all the more in the empirical ones.

Very wisely, Einstein maintained that a good researcher spends one hour of his day building his logical edifice, and the other twenty-three carefully hunting for any contradictions he may have incurred.

Just as for a mathematician no proof will ever be rigorous and verified enough, so for the experimentalist his tests will never be robust and repeatable enough.

Methodological error is always lying in wait, and it often manages to camouflage itself with consummate skill.

*

In truth the scientist, the real one, is more interested in the method than in the results, more focused on hunting for errors than on the possible grandeur of the findings.

But we are all human.

Often one falls in love with one’s results and gazes at them with a lover’s eyes rather than with the requisite professional detachment. Very often, more often than one might believe, and even in good faith, one focuses more on the beauty or possible usefulness of the results than on the blemishes found along the way.

This is the famous problem of the anomalous data point.

In the great majority of cases it is surely a plain and simple error, but now and then it is the empirical evidence of a new and far more complex phenomenon. Just think of how much Mercury’s strange behaviour influenced the formulation of the theory of relativity.
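A minimal sketch of how an anomalous point is commonly flagged in practice (a plain z-score screen on invented data; the threshold of 2 is a convention, not a law):

```python
# Invented measurements; the last one looks anomalous.
data = [9.8, 10.1, 9.9, 10.2, 10.0, 14.7]

n = len(data)
mean = sum(data) / n
std = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5  # sample std dev

for x in data:
    z = (x - mean) / std
    flag = "  <-- anomalous?" if abs(z) > 2 else ""
    print(f"x = {x:5.1f}   z = {z:+.2f}{flag}")
```

Whether the flagged point is a blunder or a Mercury-style discovery is exactly what the screen cannot tell you; only methodological scrutiny can.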

*

In a climate of mutual fairness, the scientific world is accustomed to making, and to receiving, criticism, at times even ferocious. Critical review is a sign of intelligence and indeed of psychological normality. But even a tight-pressed critique should be delivered with grace, courtesy and competence. And the first sign of good manners is introducing oneself. Whoever is ashamed of his own name should be all the more ashamed of his own arguments.

But criticisms, however destructive, must themselves meet scientific criteria.

Criticism of results is rarely productive: the only truly useful criticisms are those that dig out some contradiction in the method. The former are essentially useless, and a waste of valuable time.

Anonymous criticism has its one proper application at the moment a paper is submitted for publication to a scientific journal. The editor sends copies to several experts, the referees, who give an anonymous judgment, for understandable human reasons. The editor vouches for the referees’ seriousness. And frequently the referees manage to improve the submitted work substantially.

Outside this very particular setting, anonymous criticism of the kind tolerated on the web ought to be condemned, without right of appeal.

For one thing, in the great majority of cases they are less criticisms than slanders, unfounded in substance and very badly expressed in form: insults do not contribute to doing science. And in truth it is very hard to find anonymous criticisms that are not also insulting.

In PubPeer’s case, even a careless reading immediately reveals the almost total absence of the conditional mood. Never mind the subjunctive, whose use ought to be restored by law. Pay attention to this. The more serious and reliable a researcher, the more he uses the subjunctive, the conditional, and the auxiliary verbs of possibility. The present indicative is reserved exclusively for those very few things that are perfectly self-evident.

Two plus two makes four: here the present indicative is used properly.

Almost invariably, anonymous criticisms use a cutting present indicative, as though their author possessed innate knowledge and felt himself the almighty lord of science. The serious scientist, by contrast, sends a letter to the authors and to the editor, signed with surname, first name and institution: he seeks only to understand the phenomena better, neither chasing easy popularity nor trying to belittle the authors.
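As a toy quantification of this stylistic signal (a crude heuristic of ours, certainly not a validated instrument), one could count hedged verb forms against total words:

```python
# Crude proxy: the share of hedging words in a review comment.
HEDGES = ("might", "could", "would", "may", "appear", "appears",
          "seems", "possibly", "perhaps")

def hedging_ratio(comment: str) -> float:
    words = [w.strip(".,;:!?") for w in comment.lower().split()]
    if not words:
        return 0.0
    return sum(w in HEDGES for w in words) / len(words)

blunt = "Figure 2 is fabricated. The authors manipulated the data."
careful = "Panels 2a and 2c appear similar; the duplication might be an error."
print(f"{hedging_ratio(blunt):.2f} vs {hedging_ratio(careful):.2f}")  # 0.00 vs 0.17
```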

Unfortunately there are many people walking around with a colander on their head, without even noticing.


The Economist. 2016-11-05. The watchers on the Web

A court case may define the limits of anonymous scientific criticism.

*

Many scientific studies are flawed. Often, the reason is poor methodology. Sometimes, it is outright fraud. The conventional means of correction—a letter to the journal concerned—can take months. But there is now an alternative. PubPeer is a website that lets people comment anonymously on research papers and so, in theory, helps purge the scientific literature of erroneous findings more speedily.

Since its launch in 2012, PubPeer has alerted scientists to mistakes and image manipulation in papers, and exposed cases of misconduct. But it has also attracted criticism, not least from journal editors, some of whom argue anonymity’s cloak lets vendettas flourish unchecked. Now the site is embroiled in a court case that tests the limits of free speech under America’s First Amendment, and may define what it is permissible for researchers to say online and anonymously about science.

The proceedings centre on discussions that began on the site in November 2013. These highlighted apparent similarities between images showing the results of different experiments in papers by Fazlul Sarkar, a cancer researcher who was then based at Wayne State University in Detroit. Dr Sarkar alleges that certain commenters insinuated he was guilty of scientific fraud. The comments, he says, together with anonymous e-mails sent to the University of Mississippi, cost him the offer of a professorship there. In October 2014 he sued the commenters for defamation and subpoenaed PubPeer to disclose their identities. A court is now expected to decide whether the site will be forced to do so.

The American Civil Liberties Union has taken on the case on PubPeer’s behalf. Its lawyer, Alex Abdo, says that the anonymity of PubPeer’s commenters is protected by American law unless Dr Sarkar can provide evidence that their statements are false and have damaged his reputation. Evidence filed by PubPeer from John Krueger, an image-analysis expert, states the images in question “did not depict different experiments as they purported to” or contained other “irregularities”, and may have been manipulated. Mr Abdo asserts that the comments identified by Dr Sarkar are not defamatory. Therefore PubPeer should not be forced to disclose the commenters’ identities.

Who blows the whistle?

By contrast, Dr Sarkar’s lawyer, Nick Roumel, argues the law should not provide anonymous commenters with more protection than it gives those who post under their real names. It is impossible to contact PubPeer’s commenters to establish what they know about the allegations without knowing their identities, he says.

In March 2015 a judge at the Wayne County Circuit Court agreed that PubPeer need not disclose the identities of any of its commenters except for one. That commenter had confirmed on the site that he or she had notified Wayne State University of problems with Dr Sarkar’s papers. A prolific pseudonymous whistle-blower named Clare Francis is known to have e-mailed Wayne State in November 2013, to notify it of concerns with Dr Sarkar’s work aired on PubPeer, adding in her e-mail (if, indeed, “Clare Francis” is a woman) that, in some cases, they amounted to “what many think of as scientific misconduct.” Whether Clare Francis and the subject of the judge’s order are the same is not clear.

Both sides lodged appeals against the ruling. PubPeer objects to revealing the identity of the last commenter. Mr Roumel wants to know the identities of them all.

Two goliaths of information technology, Google and Twitter, lodged a brief in support of PubPeer in January 2016. So did two giants of science: Harold Varmus, a Nobel prize-winning cancer researcher, and Bruce Alberts, a former president of the National Academy of Sciences. They argued that the First Amendment protects “unfettered scientific discourse”.

On October 19th the Scientist, a magazine, published some findings of a misconduct investigation carried out by Wayne State University in 2015. The report of this investigation, which the magazine obtained under America’s Freedom of Information Act, states that Dr Sarkar “engaged in and permitted (and tacitly encouraged) intentional and knowing fabrication, falsification, and/or plagiarism of data”. Furthermore, 18 papers from Dr Sarkar’s laboratory have been retracted from five different journals.

Dr Sarkar rejects all the investigation’s findings. He states that he provided the correct images to the university but his explanations of how the errors occurred were dismissed out of hand. Despite his having more than 500 peer-reviewed papers to his name, his reputation has been destroyed because of “minor errors in a few articles,” he says. Philip Cunningham, who convened the Wayne State panel that investigated Dr Sarkar, says all evidence was carefully considered and the university stands by the integrity and accuracy of the report.

Normally, neither Dr Sarkar’s retractions nor Wayne State University’s report would have any bearing on the case because appeals can only consider evidence presented during an earlier trial. But on October 28th, in what may be a decisive ruling, the court allowed PubPeer to enter the Scientist’s story about the report into the official record of the case. The results of the appeal hearing itself, which took place on October 4th, are expected imminently.

Whichever way that decision goes, at least one side is likely to appeal against it. But however the case eventually ends, its outcome will affect the process of “open peer review” that PubPeer is pioneering by determining whether or not anonymous critics of scientific papers can, in the last analysis, retain their anonymity.

Published in: Science & Technology

99-million-year-old bird feathers found in amber.

Giuseppe Sandro Mela.

2016-10-27.

 2016-06-29__Ambra__001

It is not uncommon to find small insects or other biological forms in amber fossils.

The petrified casing brings them back to us just as they were at the time: usually, from fifty to one hundred and twenty million years ago.

Recently came the discovery of a sizable piece of amber enclosing the feathers of a bird that lived around one hundred million years ago.

2016-06-29__Ambra__002

«Two wings from birds that lived alongside the dinosaurs have been found preserved in amber»

*

«The “spectacular” finds from Myanmar are from baby birds that got trapped in the sticky sap of a tropical forest 99 million years ago»

*

«Exquisite detail has been preserved in the feathers, including traces of colour in spots and stripes»

*

2016-06-29__Ambra__003png

What is amber?

«Amber (in ancient Greek ἤλεκτρον, elektron) is a term used in the past as a synonym for fossil resin and resinite, and this ambiguity has been a source of misunderstanding and confusion. In particular, in older European literature the term amber was used in a very restrictive sense to identify “succinite”, the variety of Baltic amber most important from a gemological standpoint, and this usage is still very common today, probably because of the commercial importance this variety of amber has had in European history.

In today’s scientific community, amber means any fossil resin, and its varieties are identified by geographical provenance.

Amber is exuded by conifers in the form of resin, which then fossilizes over time and in some cases solidifies while preserving plant, fungal or animal remains, including arthropods and also, much more rarely, vertebrates. It is translucent, with a colour ranging from yellow to reddish to brown and even green. It may contain insects that were trapped at the moment of its formation. Today it is commonly collected in Poland, Lithuania, Latvia, Russia, Denmark, Germany and Sweden. Its working is widespread in the countries facing the Baltic Sea: Poland, Lithuania, Latvia, Germany and Sweden. Amber is used to make walking-stick handles, necklaces, earrings, bracelets, rings, cigarette holders and pipe stems.

Fossil amber has also been found in sediments of Carboniferous age, a geological period predating the appearance of angiosperms. This amber shows chemical characteristics similar to those of more recent ambers, indicating that the biological mechanisms capable of producing these resins were already present before the evolution of the angiosperms.» [Source]

*

Note.

The Earth has an estimated age of about 4.5 billion years.

Now then: if 100 million years ago birds already had feathers formed as they are today, the theory of evolution is a pure and simple hoax: it simply would not have had the time to run its course. Don’t tell Soros, or he will get cross. And don’t tell those slugs who still believe in it without even being on Soros’s payroll, either.

 

2016-06-29__Ambra__004png

 

Bbc. 2016-06-29. Ancient birds’ wings preserved in amber

Two wings from birds that lived alongside the dinosaurs have been found preserved in amber.

The “spectacular” finds from Myanmar are from baby birds that got trapped in the sticky sap of a tropical forest 99 million years ago.

Exquisite detail has been preserved in the feathers, including traces of colour in spots and stripes.

The wings had sharp little claws, allowing the juvenile birds to clamber about in the trees.

The tiny fossils, which are between two and three centimetres long, could shed further light on the evolution of birds from their dinosaur ancestors.

The specimens, from well-known amber deposits in north-east Myanmar (also known as Burma), are described in the journal Nature Communications.

Co-author Prof Mike Benton, from the University of Bristol, said: “The individual feathers show every filament and whisker, whether they are flight feathers or down feathers, and there are even traces of colour – spots and stripes.”

The hand anatomy shows the wings come from enantiornithine birds, which comprised a major bird grouping in the Cretaceous Period. However, the enantiornithines died out at the same time as the dinosaurs, 66 million years ago.

Dr Steve Brusatte, a vertebrate palaeontologist at Edinburgh University, described the fossils as “spectacular”.

He told BBC News: “They’re fantastic – who would have ever thought that 99-million-year-old wings could be trapped in amber?

“These are showcase specimens and some of the most surprising fossils I’ve seen in a long time. We’ve known for a few decades that many dinosaurs had feathers, but most of our fossils are impressions of feathers on crushed limestone slabs.

“Three dimensional preservation in amber provides a whole new perspective and these fossils make it clear that very primitive birds living alongside the dinosaurs had wings and feather arrangements very similar to today’s birds.”

The international team of researchers used advanced X-ray scanning techniques to examine the structure and arrangement of the bones and feathers.

Claw marks in the amber suggest the birds were still alive when they were engulfed by the sticky sap.

Dr Xing Lida, the study’s lead author, explained: “The fact that the tiny birds were clambering about in the trees suggests that they had advanced development, meaning they were ready for action as soon as they hatched.

“These birds did not hang about in the nest waiting to be fed, but set off looking for food, and sadly died perhaps because of their small size and lack of experience.

“Isolated feathers in other amber samples show that adult birds might have avoided the sticky sap, or pulled themselves free.”