Archive for the ‘Alphago’ Category
The Turing Test is Dead. Long Live The Lovelace Test – Walter Bradley Center for Natural and Artificial Intelligence
Posted: April 8, 2020 at 4:46 am
Photo by Thought Catalog on Unsplash. Robert J. Marks, April 2, 2020
The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour indistinguishable from a human. Many think that Turing's proposal for intelligence, especially creativity, has been proven inadequate. Is the Lovelace test a better alternative? Robert J. Marks and Dr. Selmer Bringsjord discuss the Turing test, the Lovelace test, and machine creativity.
Mind Matters features original news and analysis at the intersection of artificial and natural intelligence. Through articles and podcasts, it explores issues, challenges, and controversies relating to human and artificial intelligence from a perspective that values the unique capabilities of human beings. Mind Matters is published by the Walter Bradley Center for Natural and Artificial Intelligence.
See the rest here:
The New ABCs: Artificial Intelligence, Blockchain And How Each Complements The Other – JD Supra
Posted: March 14, 2020 at 1:41 pm
The terms "revolution" and "disruption" are probably bandied about a bit more liberally than they should be in the context of technological innovation. Technological revolution and disruption imply upheaval and a systemic reevaluation of the way that humans interact with industry and even each other. Actual technological advancement, however, moves at a much slower pace and tends to augment our current processes rather than outright displace them. Oftentimes, we fail to realize the ubiquity of legacy systems in our everyday lives, sometimes to our own detriment.
Consider the keyboard. The QWERTY layout of keys is standard for English keyboards across the world. Even though the layout remains a mainstay of modern office setups, its origins trace back to the mass popularization of a typewriter manufactured and sold by E. Remington & Sons in 1874.[1] Urban legend has it that the layout was designed to slow typists down so they would not jam the typing mechanisms, yet the reality is otherwise: the layout was actually designed to assist those transcribing messages from Morse code.[2] Once typists took to the format, the keyboard as we know it today was embraced as a global standard even as the use of Morse code declined.[3] As with QWERTY, our familiarity and comfort with legacy systems have contributed to their rise. These systems are varied in their scope, and they touch everything: healthcare, supply chains, our financial systems and even the way we interact at a human level. However, their use and value may be tested sooner than we realize.
Artificial intelligence (AI) and blockchain technology (blockchain) are two novel innovations that offer the opportunity to move beyond our legacy systems and streamline enterprise management and compliance in ways previously unimaginable. However, their potential is often clouded by their buzzword status, with bad actors taking advantage of the hype. When one cuts through the haze, it becomes clear that these two technologies hold significant transformative potential. While each can certainly function on its own, AI and blockchain also complement one another in ways that offer business solutions: not only the ability to build upon legacy enterprise systems but also the power to eventually upend them in favor of next-level solutions. Getting to that point, however, takes time and is not without cost. While humans are generally quick to embrace technological change, our regulatory frameworks take longer to adapt. The need to address this constraint is pressing: real market solutions for these technologies have started to come online, while regulatory opacity and hurdles abound. As innovators seek to exploit the convergence of AI and blockchain, they must pay careful attention to overcoming both the technical and the regulatory hurdles that accompany them. Do so successfully, and the rewards promise to be bountiful.
First, a bit of taxonomy is in order.
AI in a Nutshell:
Artificial intelligence is the capability of a machine to imitate intelligent human behavior, such as learning, understanding language, solving problems, planning and identifying objects.[4] More practically speaking, however, today's AI is mostly limited to "if X, then Y" varieties of simple tasks. AI is typically trained through supervised learning, and this process requires an enormous amount of data. For example, IBM's question-answering supercomputer Watson was able to beat Jeopardy! champions Brad Rutter and Ken Jennings in 2011 because Watson had been trained to understand simple questions through countless iterations and had access to vast knowledge in the form of digital data. Likewise, Google DeepMind's AlphaGo defeated the Go champion Lee Sedol in 2016 because AlphaGo had worked through countless Go scenarios and collected them as data. As such, most implementations of AI involve simple tasks, assuming that relevant information is readily accessible. In light of this, Andrew Ng, the Stanford roboticist, noted that "[i]f a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."[5]
Moreover, a significant portion of AI currently in use or being developed is based on machine learning. Machine learning is a method by which AI adapts its algorithms and models based on exposure to new data, thereby allowing AI to learn without being programmed to perform specific tasks. Developing high-performance machine learning-based AI therefore requires substantial amounts of data. Data high in both quality and quantity will lead to better AI, since an AI instance can indiscriminately accept all data provided to it and can refine and improve its algorithms to the extent of the provided data. For example, AI that visually distinguishes Labradors from other breeds of dogs will become better at its job the more it is exposed to clear and accurate pictures of Labradors.
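To make the supervised learning point concrete, here is a minimal sketch (an illustration added here, not code from the article) using scikit-learn: synthetic labeled features stand in for "Labrador" / "not Labrador" images, and the classifier's accuracy improves as it is trained on more labeled examples. The data, model choice and parameters are assumptions made purely for demonstration.

```python
# Hypothetical supervised learning sketch: accuracy improves with more labeled data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled image features ("Labrador" = 1, "not Labrador" = 0).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for n in (50, 500, 2500):  # train on progressively more labeled examples
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, "examples ->", round(accuracy_score(y_test, model.predict(X_test)), 3))
```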
It is in these data amalgamations that AI does its job best. Scanning and analyzing vast subsets of data is something that a computer can do far more rapidly than a human. However, AI is not perfect, and many of the pitfalls it is prone to result from the difficulty of conveying to machines how humans process information. One example of this phenomenon that has dogged the technology is AI's penchant for "hallucinations." An AI algorithm hallucinates when the input is interpreted by the machine into something that seems implausible to a human looking at the same thing.[6] Case in point: AI has interpreted an image of a turtle as that of a gun, or a rifle as a helicopter.[7] This occurs because machines are hypersensitive to, and interpret, the tiniest of pixel patterns that we humans do not process. Because of the complexity of this analysis, developers are only now beginning to understand such AI phenomena.
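That hypersensitivity to tiny input patterns can be illustrated with a toy adversarial example. The sketch below is a deliberate simplification (a linear model on random features rather than a deep network on real images, all of it assumed for illustration), but it shows the mechanism: a small perturbation aimed along the model's weights flips the prediction even though the input barely changes.

```python
# Toy adversarial example: a small, targeted nudge flips the model's prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))                 # stand-in for flattened image pixels
y = (X[:, :10].sum(axis=1) > 0).astype(int)     # simple ground-truth rule
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
w = clf.coef_[0]
margin = clf.decision_function([x])[0]          # signed score for the current prediction
step = 1.1 * abs(margin) / np.abs(w).sum()      # per-feature budget just large enough to flip
x_adv = x - step * np.sign(w) * np.sign(margin)

print("original prediction: ", clf.predict([x])[0])
print("perturbed prediction:", clf.predict([x_adv])[0])
print("largest per-feature change:", round(float(np.abs(x_adv - x).max()), 4))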
When one moves beyond pictures of guns and turtles, however, AI's shortfalls can become much less innocuous. AI learning is based on inputted data, yet much of this data reflects the inherent shortfalls and behaviors of everyday individuals. As such, without proper correction for bias and other human assumptions, AI can, for example, perpetuate racial stereotypes and racial profiling.[8] Therefore, proper care regarding what goes into the system and who gets access to the outputs is essential for the ethical employment of AI. But therein lies an additional problem: who has access to enough data to really take full advantage of and develop robust AI?
Not surprisingly, because large companies are better able to collect and manage increasingly large amounts of data than individuals or smaller entities, such companies have remained better positioned to develop complex AI. In response to this tilted landscape, various private and public organizations, including the U.S. Department of Justice's Bureau of Justice, Google Scholar and the International Monetary Fund, have launched open source initiatives to make publicly available vast amounts of data that such organizations have collected over many years.
Blockchain in a Nutshell:
Blockchain technology as we know it today came onto the scene in 2009 with the rise of Bitcoin, perhaps the most famous application of the technology. Fundamentally, blockchain is a data structure that makes it possible to create a tamper-proof, distributed, peer-to-peer system of ledgers containing immutable, time-stamped and cryptographically connected blocks of data. In practice, this means that data can be written only once onto a ledger, which is then read-only for every user. Many of the most utilized blockchain protocols, for example the Bitcoin or Ethereum networks, maintain and update their distributed ledgers in a decentralized manner, which stands in contrast to traditional networks reliant on a trusted, centralized data repository.[9] By structuring the network in this way, these blockchain mechanisms remove the need for a trusted third party to handle and store transaction data. Instead, data are distributed so that every user has access to the same information at the same time. In order to update a ledger's distributed information, the network employs pre-defined consensus mechanisms and military-grade cryptography to prevent malicious actors from going back and retroactively editing or tampering with previously recorded information. In most cases, networks are open source, maintained by a dedicated community and made accessible to any connected device that can validate transactions on a ledger, which is referred to as a node.
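The hash-chaining idea behind such a tamper-evident ledger can be sketched in a few lines. The following is a bare-bones illustration, not any production protocol: it omits consensus, peer-to-peer distribution and digital signatures entirely, and simply shows that editing an earlier block breaks every hash that follows it.

```python
# Minimal hash-chained ledger sketch (illustrative only).
import hashlib, json, time

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def is_valid(chain):
    for prev, curr in zip(chain, chain[1:]):
        body = {k: curr[k] for k in ("timestamp", "data", "prev_hash")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if curr["prev_hash"] != prev["hash"] or curr["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
print(is_valid(chain))                      # True
chain[1]["data"] = "Alice pays Bob 500"     # retroactive tampering
print(is_valid(chain))                      # False: the edit is immediately detectable
```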
Nevertheless, the decentralizing feature of blockchain comes with significant resource and processing drawbacks. Many blockchain-enabled platforms run very slowly and have interoperability and scalability problems. Moreover, these networks use massive amounts of energy. For example, the Bitcoin network requires the expenditure of about 50 terawatt-hours per year, roughly equivalent to the energy needs of the entire country of Singapore.[10] To ameliorate these problems, several market participants have developed enterprise blockchains with permissioned networks. While many of them may be open source, the networks are led by known entities that determine who may verify transactions on that blockchain, and, therefore, the required consensus mechanisms are much more energy efficient.
Not unlike AI, a blockchain can also be coded with certain automated processes to augment its recordkeeping abilities, and, arguably, it is these types of processes that contributed to blockchain's rise. That rise, some may say, began with the introduction of the Ethereum network and its engineering around smart contracts, a term used to describe computer code that automatically executes all or part of an agreement and is stored on a blockchain-enabled platform. Smart contracts are neither contracts in the sense of legally binding agreements nor smart in employing applications of AI. Rather, they consist of coded, automated parameters responsive to what is recorded on a blockchain. For example, if the parties in a blockchain network have indicated, by initiating a transaction, that certain parameters have been met, the code will execute the step or steps triggered by those coded parameters. The input parameters and the execution steps for smart contracts need to be specific: the digital equivalent of "if X, then Y" statements. In other words, when required conditions have been met, a particular specified outcome occurs; in the same way that a vending machine sells a can of soda once change has been deposited, smart contracts allow title to digital assets to be transferred upon the occurrence of certain events. Nevertheless, the tasks that smart contracts are currently capable of performing are fairly rudimentary. As developers figure out how to expand their networks, integrate them with enterprise-level technologies and develop more responsive smart contracts, there is every reason to believe that smart contracts and their decentralized applications (dApps) will see increased adoption.
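As a rough illustration of that vending-machine logic, the sketch below expresses an "if X, then Y" escrow rule in plain Python rather than an on-chain language such as Solidity; the class, its fields and the numbers are hypothetical, chosen only to show the conditional transfer.

```python
# Hypothetical "if X, then Y" escrow rule: title transfers once payment is recorded.
class EscrowContract:
    def __init__(self, seller, buyer, price):
        self.seller, self.buyer, self.price = seller, buyer, price
        self.paid = 0
        self.owner = seller                      # current title holder

    def record_payment(self, amount):
        self.paid += amount                      # "X": a payment is recorded on the ledger
        if self.paid >= self.price and self.owner == self.seller:
            self.owner = self.buyer              # "Y": title transfers automatically
        return self.owner

contract = EscrowContract(seller="alice", buyer="bob", price=100)
print(contract.record_payment(40))   # alice -- condition not yet met
print(contract.record_payment(60))   # bob -- condition met, transfer executes
```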
AI and blockchain technology may appear to be diametric opposites. AI is an active technology: it analyzes what is around it and formulates solutions based on the history of what it has been exposed to. By contrast, blockchain is data-agnostic with respect to what is written into it; the technology bundle is largely passive. It is primarily in that distinction that we find synergy, for each technology augments the strengths and tempers the weaknesses of the other. For example, AI technology requires access to big data sets in order to learn and improve, yet many of the sources of these data sets are hidden in proprietary silos. With blockchain, stakeholders are empowered to contribute data to an openly available and distributed network with immutability of data as a core feature. With a potentially larger pool of data to work from, the machine learning mechanisms of a widely distributed, blockchain-enabled and AI-powered solution could improve far faster than those of a private-data AI counterpart. These technologies on their own are more limited. Blockchain technology, in and of itself, is not capable of evaluating the accuracy of the data written into its immutable network: garbage in, garbage out. AI, however, can act as a learned gatekeeper for what information may come on and off the network and from whom. Indeed, the interplay between these diverse capabilities will likely lead to improvements across a broad array of industries, each with unique challenges that the two technologies together may overcome.
[1] See Rachel Metz, Why We Can't Quit the QWERTY Keyboard, MIT Technology Review (Oct. 13, 2018), available at: https://www.technologyreview.com/s/611620/why-we-cant-quit-the-qwerty-keyboard/.
[2] Alexis Madrigal, The Lies You've Been Told About the Origin of the QWERTY Keyboard, The Atlantic (May 3, 2013), available at: https://www.theatlantic.com/technology/archive/2013/05/the-lies-youve-been-told-about-the-origin-of-the-qwerty-keyboard/275537/.
[3] See Metz, supra note 1.
[4] See Artificial Intelligence, Merriam-Webster's Online Dictionary, Merriam-Webster (last accessed Mar. 27, 2019), available at: https://www.merriam-webster.com/dictionary/artificial%20intelligence.
[5] See Andrew Ng, What Artificial Intelligence Can and Can't Do Right Now, Harvard Business Review (Nov. 9, 2016), available at: https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now.
[6] Louise Matsakis, Artificial Intelligence May Not Hallucinate After All, Wired (May 8, 2019), available at: https://www.wired.com/story/adversarial-examples-ai-may-not-hallucinate/.
[7] Id.
[8] Jerry Kaplan, Opinion: Why Your AI Might Be Racist, Washington Post (Dec. 17, 2018), available at: https://www.washingtonpost.com/opinions/2018/12/17/why-your-ai-might-be-racist/?noredirect=on&utm_term=.568983d5e3ec.
[9] See Shaanan Cohney, David A. Hoffman, Jeremy Sklaroff and David A. Wishnick, Coin-Operated Capitalism, Penn. Inst. for L. & Econ. (No. 18-37) (Jul. 17, 2018) at 12, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3215345##.
[10] See Bitcoin Energy Consumption Index (last accessed May 13, 2019), available at: https://digiconomist.net/bitcoin-energy-consumption.
See the rest here:
The New ABCs: Artificial Intelligence, Blockchain And How Each Complements The Other - JD Supra
Enterprise AI Books to Read This Spring – DevOps.com
Posted: at 1:41 pm
If you are anything like me, and curious about how new enterprise tools will transform our relationship with work and business-technology, then you are probably reading a lot about artificial intelligence. There's likely no other topic that's been written about in more depth in recent years, yet much of it is unrealistic or poorly reasoned hype.
This is why I've been reading more books from trusted authors on the subject. In previous years, I've read books such as Erik Brynjolfsson's and Andrew McAfee's Race Against the Machine, and Martin Ford's book Rise of the Robots: Technology and the Threat of a Jobless Future. Both were great books that brilliantly framed AI. Last year, I wasn't as fortunate.
Last year, my reading included The Master Algorithm by Pedro Domingos. It was an interesting look at machine learning algorithms, and specifically the quest for the algorithm, the "master algorithm," that would create all future algorithms without the need for humans. The book proved a disappointment because the author seemed convinced, almost dogmatically so, that general AI was just around the corner, much as the AI pioneers of the '70s and '80s believed it was around the corner for them.
The next book up for me was Life 3.0 by Max Tegmark. While this was a good read, it proved to be more of an analysis of what has already been written than a book that pushed the subject of AI forward in a new or unique way.
Hopefully, my next batch of AI reads will prove better. I realize I'm playing catch-up here, as most of these aren't new books. But they do look to be important books on the subject.
Here's what I currently have lined up.
Author: Kai-fu Lee.
Following the defeat of the world's top Go player by Google's AlphaGo AI, the government of China set ambitious plans to become the global AI hub by 2030. In his book, Kai-fu Lee contends that China has quickly caught up to the U.S. and that dramatic changes from AI developments are happening much more quickly than many expect.
Lee also examines universal basic income, what jobs may be enhanced with AI, and possible solutions to the biggest changes AI promises to bring to us all.
Authors: Peter Norvig and Stuart Russell.
At 1,152 pages, Artificial Intelligence: A Modern Approach (due out this April) isn't a light read. The latest edition of this book offers a look across the entire field of AI and a deep dive into machine learning, deep learning, transfer learning, multi-agent systems, robotics, natural language processing, causality, probabilistic programming and more.
I have high hopes for this book, as its treatment appears to be exactly what I'm looking forward to reading: an overview of the state of AI without going too light on the treatment of each topic.
Authors: Mariya Yao, Adelyn Zhou and Marlene Jia.
This book made the list because it covers AI as it can be applied to business-technology. It promises to be a roadmap on how to use data, technology, design and staff to solve enterprise business problems.
In the authors' words: "We teach you how to lead successful AI initiatives by prioritizing the right opportunities, building a diverse team of experts, conducting strategic experiments and consciously designing your solutions to benefit both your organization and society as a whole."
That's exactly what enterprises need to do to succeed at integrating AI into their companies in order to get value from this growing enterprise technology.
Authors: Andrew McAfee and Erik Brynjolfsson.
I learned from and enjoyed both Race Against the Machine and The Second Machine Age, which looked at the impact of machine intelligence and big data. In this latest book from MIT Principal Research Scientist Andrew McAfee and MIT Center for Digital Business Director Erik Brynjolfsson, the authors look at another form of augmented intelligence, our collective intelligence, and what it means for transportation, medical research, financial services and more.
That's it for the reading list for now. I'd appreciate hearing what AI-related business books you are reading this spring and summer.
George V. Hulme
Originally posted here:
Chess grandmaster Gary Kasparov predicts AI will disrupt 96 percent of all jobs – The Next Web
Posted: February 25, 2020 at 1:45 am
IBM's Deep Blue wasn't supposed to defeat chess grandmaster Garry Kasparov when the two of them had their 1997 rematch. Computer experts of the time said machines would never beat us at strategy games because human ingenuity would always triumph over brute-force analysis.
After Kasparov's loss, the experts didn't miss a beat. They said chess was too easy and postulated that machines would never beat us at Go. Champion Lee Sedol's loss against DeepMind's AlphaGo proved them wrong there.
Then the experts said AI would never beat us at games where strategy could be overcome by human creativity, such as poker. Then AI beat us at poker. And at StarCraft. Now it's going for even more complex games such as Magic: The Gathering. History shows us that a machine can be developed to outperform a human at any given task.
Apparently the only thing more powerful than human hubris is our ability to iterate new technologies. At least that's according to Kasparov, who recently told Wired's Will Knight:
1997 was an unpleasant experience, but it helped me understand the future of human-machine collaboration. We thought we were unbeatable, at chess, Go, shogi. All these games, they have been gradually pushed to the side
History repeated itself in 2016 when Sedol lost to AlphaGo. Much like Kasparov, Sedol was shocked at his defeat by a machine. He retired from competitive Go play in 2019, saying that even if he were to become the number one player again, AI is "an entity that cannot be defeated."
But Kasparov, who's had more than two decades to reflect on his loss, sees artificial intelligence as an opportunity for collaboration, not a future overlord or oppressor. He predicts that 96 percent of all human jobs (those not specifically requiring human creativity) will be destroyed by AI in the coming years.
This isn't a gloomy prediction; as Kasparov told Wired, with every age of major technological advancement the majority of jobs are disrupted and new ones are created. That's why even small towns have half a dozen automobile mechanics, but you'll be hard-pressed to find your local farrier or blacksmith unless you live in a farming community or the world of Skyrim.
The real challenge, according to Kasparov, is creating new jobs that cater to human creativity:
For several decades we have been training people to act like computers, and now we are complaining that these jobs are in danger. Of course they are. We have to look for opportunities to create jobs that will emphasize our strengths. Technology is the main reason why so many of us are still alive to complain about technology. It's a coin with two sides. I think it's important that, instead of complaining, we look at how we can move forward faster.
Kasparov's prescription for the future seems to be that we should speed things up. He wants us to develop AI faster because technology is inevitable. And, as the self-professed first knowledge worker whose job was threatened by a machine, he has a unique perspective.
But speeding things up will, ultimately, displace workers who've been able to rely on their skill set so far. This leaves CEOs and governments in a predicament where the greater good may ultimately come from ditching humans for machines, but the immediate impact would leave a significant percentage of workers with no means of income.
To this end, Kasparov mentioned that it might be time for governments to consider a universal basic income (UBI). And he's not the only person advocating for a UBI to combat the threat of AI-caused worker displacement.
Former US presidential candidate Andrew Yang ran his now-defunct campaign on the promise that he'd usher the country into the AI era by providing a monthly "freedom dividend" of $1,000 to every adult citizen.
What do you think? Is your job AI-proof?
Published February 24, 2020 21:00 UTC
Read the original:
Chess grandmaster Gary Kasparov predicts AI will disrupt 96 percent of all jobs - The Next Web
The top 5 technologies that will change health care over the next decade – MarketWatch
Posted: at 1:45 am
The past decade was about the rise of digital health technology and patient empowerment. The next decade will be about artificial intelligence, the use of health sensors and the so-called Internet of Healthy Things and how it could improve millions of lives.
The cultural transformation of health care we call digital health has been changing the hierarchy in care into an equal-level partnership between patients and physicians as 21st century technologies have started breaking down the ivory tower of medicine. But these milestones are nothing compared with what is about to become reality.
With advancements in exoskeleton technology, AI's ever-increasing importance in health care, and technologies like 5G and quantum computing soon going mainstream, there's much to be excited about.
Here are the five biggest themes for health and medicine for the next 10 years.
Artificial intelligence in medicine
Developments in artificial intelligence will dominate the next decade. Machine learning is a method for creating artificial narrow intelligence ("narrow" refers to doing one task extremely well) and a field of computer science that enables computers to learn without being explicitly programmed, building on computational statistics and data mining. The field has different types: learning can be supervised, unsupervised, semi-supervised or reinforcement-based, among others. It has an unprecedented potential to transform health-care processes and medical tasks in the future, and it has already started its invisible revolution.
If we consider how AlphaGo, the AI developed by Google's DeepMind lab, beat world champion Lee Sedol at the classic Chinese game Go by coming up with inventive moves that took experts by surprise, we can get a glimpse of what AI can hold for health care. Such moves were made possible by the combination of neural networks and reinforcement learning that this AI uses. This enabled the software to operate without the restrictions of human cognitive limitations, devise its own strategy and output decisions that baffled experts.
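For readers unfamiliar with reinforcement learning, the toy sketch below shows the core loop in its simplest tabular form: an agent improves its strategy purely from trial, error and reward. It is a deliberate simplification of my own; AlphaGo combines deep neural networks, self-play and tree search at a vastly larger scale.

```python
# Toy tabular Q-learning: the agent learns to walk right along a 5-state corridor.
import random

N_STATES, EPISODES = 5, 300                  # states 0..4; state 4 holds the reward
Q = [[0.0, 0.0] for _ in range(N_STATES)]    # Q[state][action], action 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def choose(qs):
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
    if random.random() < epsilon:
        return random.randint(0, 1)
    best = max(qs)
    return random.choice([a for a, q in enumerate(qs) if q == best])

for _ in range(EPISODES):
    s = 0
    while s != N_STATES - 1:
        a = choose(Q[s])
        s_next = max(0, s - 1) if a == 0 else s + 1
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# Learned policy for each non-goal state; expect [1, 1, 1, 1], i.e. "always move right".
print([max(range(2), key=lambda a: Q[s][a]) for s in range(N_STATES - 1)])
```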
We can expect to see the same surprises in medical settings. Imagine new drugs designed by such algorithms; high-level analysis of tens of millions of studies for a diagnosis; or drug combinations nobody has thought of before. When applied to medicine, an algorithm trained via reinforcement learning could discover treatments and cures for conditions when human medical professionals could not. Cracking the reasoning behind such unconventional and novel approaches will herald the true era of art in medicine.
In global health, for example, an algorithm can provide a reliable map of future measles outbreak hot spots. It uses statistics on measles vaccination rates and disease outbreaks from the Centers for Disease Control and Prevention, as well as non-traditional health data, including social media and a huge range of medical records. That's just one example, but the field is already buzzing with smart algorithms that can facilitate the search for new drug targets, improve the speed of clinical trials or spot tumors on CT scans.
However, while experts believe that AI will not replace medical professionals, it also seems true that medical professionals who use AI will replace those who don't.
A myriad of health sensors
Medical technology went through an amazing development in the 2010s, and there's now no single square centimeter of the human body without quantifiable data. For example, AliveCor's Kardia and the Apple Watch measure ECG and detect atrial fibrillation with high sensitivity. The EKO Core digital stethoscope records heart and lung sounds, while blood pressure is monitored with the Omron Blood Pressure Smartwatch, the MOCAcare pocket sensor and blood pressure cuff, the iHealth Clear, the Skeeper "pocket cardiologist," or the Withings Blood Pressure Monitor, and of course dozens of traditional blood pressure cuffs.
There are dozens of health trackers for respiration, sleep and, of course, movement. And while researchers can't decipher your dreams yet, they are working on it, alongside figuring out all kinds of brain activity, for example through EEG, a method that records electrical activity in the brain using electrodes attached externally to the scalp. The NeuroSky biosensor and the Muse headband use it to understand the mind better and, in the latter case, allow for more effective meditation. As you can see, there's not much left unmeasured in your body, and this will only intensify in the future. For example, we expect digital tattoos to become commercially available within five years; they will not only measure the majority of the above-mentioned vital signs but do so continuously. These tiny sensors will notify us when something is about to go wrong and we need medical advice or intervention.
Moreover, with developments in 3-D printing as well as circuit-printing technologies, flexible electronics and materials, applying so-called digital tattoos or electronic tattoos to the skin for days or even weeks has become possible.
Made of flexible, waterproof materials impervious to stretching and twisting, coupled with tiny electrodes, digital tattoos are able to record and transmit information about the wearer to smartphones or other connected devices. While these are only in use in research projects, they could allow health-care experts to monitor and diagnose critical health conditions such as heart arrhythmia, the heart activity of premature babies, sleep disorders and brain activity noninvasively. Moreover, by tracking vital signs 24 hours a day without the need for a charger, the technology is especially suited to following patients at high risk of stroke, for example. Although we are not there yet, there are certain promising solutions on the market, such as MC10's BioStampRC sensor.
Quantum computing puts medical decision-making on a new level
In 2019, Google claimed quantum supremacy and made the cover of Nature magazine. One example of how this technology will have a major impact on the health-care sector is quantum computing taking medical decision-making to a whole new level and even augmenting it with special skills. What if such computers could offer perfect decision support for doctors? They could skim through all the studies at once, find correlations and causations that the human eye would never find, and stumble upon diagnoses or treatment options that doctors could never have figured out by themselves.
At the very endpoint of this development, quantum computers could create an elevated version of PubMed, where information would reside in the system but not in the traditional written form; it would reside in qubits of data, as no one except the computer would read the studies anymore.
In addition, the applications of quantum computing to health care are manifold, ranging from much faster drug design to quicker and cheaper DNA sequencing and analysis to reinforced security for personal medical data. While the technology does hold such promises, we still have to be patient before practical solutions can be implemented in medicine. However, with continued progress in this area, even though quantum computing has seemed like something from a science fiction novel, this decade will see the first such computer used in clinical practice too.
Chatbots as the first line of care
Symptom checkers that function on the same principle as chatbots are already available, free of charge. However, these rely on the user inputting symptoms and complaints manually. We yearn for one that can make predictions and suggestions based on a user's data, like sleep tracking, heart rate and activity collected via wearables. With such features, these bots can help users make healthier choices without having to drag themselves to their doctor.
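The contrast between today's manual symptom checkers and the wearable-driven assistants described above can be sketched crudely as follows; the rules and thresholds are illustrative placeholders invented for this example, not clinical guidance.

```python
# Illustrative contrast: typed-in symptoms vs. advice volunteered from wearable data.
RULES = {
    ("fever", "cough"): "Possible respiratory infection; consider seeing a doctor.",
    ("headache", "light sensitivity"): "Possible migraine; rest in a dark room.",
}

def manual_checker(symptoms):
    """Today's model: the user must type in their complaints."""
    for keys, advice in RULES.items():
        if all(k in symptoms for k in keys):
            return advice
    return "No match; please describe more symptoms."

def wearable_checker(resting_hr, hours_slept):
    """The aspiration: the bot reads sensor data and speaks up on its own."""
    if resting_hr > 100:
        return "Resting heart rate is elevated; consider a check-up."
    if hours_slept < 5:
        return "You slept very little; an earlier night is advisable."
    return "No concerns detected today."

print(manual_checker({"fever", "cough"}))
print(wearable_checker(resting_hr=105, hours_slept=7))
```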
There was a Black Mirror episode titled "Rachel, Jack and Ashley Too" that featured an incredibly smart and emotional chatbot that had human-like conversations with the main character. Think about having a similar personalized chatbot that's accessible via your smart device and has additional health and lifestyle features. This chatty virtual being could wake you up at the appropriate time based on your sleep pattern and, before you even get out of bed, advise you to take your antihistamines because the pollen concentration will be particularly high during your commute that day. It could recommend what you should eat for each meal based on your nutrigenomic profile. It could find the best words to motivate you to go to the gym, or the best jokes to get you into a good mood. But would you rather bend to the rules of an AI, essentially forgoing your freedom of choice, than experience life based more on your own will?
5G serving the whole ecosystem of digital health
5G networks will enable data to be downloaded at more than 1 gigabit per second (1 Gbit/s), allowing for downloads 10 to 100 times faster than the currently available 4G services. 4G networks can serve only around a thousand devices within a square kilometer, while 5G can serve a million. It will make the era of the Internet of Things possible by connecting a huge number of health trackers with laptops, smartphones and many more digital devices. There will be no connection issues or latency, as the trackers will be able to work in harmony while getting the most out of our data.
Such a boost will allow for more reliable communication, which is a must in areas like telesurgery, remote consultation and remote monitoring. With bigger bandwidth and faster connection, there might be a boost in wearables as health IoT networks become more stable and reliable, and further help with patient engagement in relation to their health.
Major applications of 5G are expected to be apparent starting in 2021.
Dr. Bertalan Mesko, Ph.D., is The Medical Futurist and director of The Medical Futurist Institute, analyzing how science fiction technologies can become reality in medicine and healthcare. As a geek physician with a Ph.D. in genomics, he is a keynote speaker and an Amazon Top-100 author.
Read more:
The top 5 technologies that will change health care over the next decade - MarketWatch
How to overcome the limitations of AI – TechTarget
Posted: February 20, 2020 at 9:45 am
The 2010s gave rise to a number of tech bubbles, and the threat of those bubbles bursting in 2020 is resurfacing nightmares for some in the tech community of dot-com-era busts. One such bubble could be AI.
Yet, some of today's most successful tech companies -- Google chief among them -- grew out of the shattered landscape of the post-dot-com tech scene. The same pattern could play out in AI. Even if the current AI bubble does burst, there will most likely continue to be successful companies offering impactful tools.
Some experts claim we are in an "AI autumn," as the technology that once was feared for its potential to wipe out broad swaths of jobs has fallen short of its expected potential in many categories. Yet, underestimating the benefits of AI is a huge mistake, as various machine learning technologies are already providing value to businesses. But, given the limitations of AI, how can we get to a future where the technology has the world-changing impact that was previously expected?
Google's AlphaGo Zero forced the current world champion of the game Go into early retirement. In Lee Se-dol's own words, AI is "an entity that cannot be defeated." Using reinforcement learning, the AI played millions of games against itself at superhuman speed -- a number humans can't match in a lifetime of gameplay. The hardware costs for AlphaGo Zero also go up to $25 million.
However, the new world champion would fall flat on its face with the tiniest change to the game's rules. It also can't use its knowledge to master any other game. Humans are superior at applying existing knowledge to new tasks with limited data. This is something most AI pioneers agree upon.
"Current AI algorithms have enormous data requirements to learn the simplest tasks, and that puts a strict restriction on where they can be applied," said Abhimanyu, co-founder and CEO of Agara, which analyzes voice with the aid of AI to augment customer support operators. "While neural networks show superhuman performance, their predictions are sometimes wildly incorrect, so much that a human would never make a similar mistake."
As Jerome Pesenti, Facebook's head of AI, put it, the AI field is about to hit the wall. This raises the question: How can we build smarter AI?
Many experts believe improvements in hardware and algorithms are necessary to break that wall. Some even suggest that we need quantum computers.
Though deep learning and neural networks were developed to mimic how our neurons communicate, there is still much we don't know about the brain's inner workings, which outperforms thousands of CPUs and GPUs.
"Even our supercomputers are weaker than the human brain, which can run 1 exaflop calculations per second," Abhimanyu said. "However, since our algorithms have a long way to improve, it's hard to predict how much computation power we'd need."
More processing power doesn't generally equate to more intelligence. We can see this in the brainpower of various animals.
"As a simple proof point, there are animals with both much bigger brains and moreneurons than humans have," said Alan Majer, CEO and founder of AI and robotics development company Good Robot."So, if we wait for some kind of hardware tipping point, we're likely to be disappointed."
Recognizing the limitations of AI is the best thing we can do for the developing technology. While we are far off from human-level intelligence, companies are taking innovative approaches to overcome those constraints.
Explainable AI is one important approach.
AI has traditionally operated as a black box where the user feeds the questions and the algorithm throws out the answers. It was born from a need to program complex tasks, and no programmer was able to code all the logical decision variations. Thus, we let the AI learn by itself. However, this is about to change.
"Explainable, cognitive AI builds trust with people so humans and machines can work together in a collaborative, symbiotic way," said A.J. Abdallat, CEO of machine learning development company Beyond Limits. "Because explainable AI technologies are educated with knowledge, in addition to being trained with data, they understand how they solve the problem and the context that makes the information relevant."
The higher the potential stakes, the more important it is to know why AI arrived at a certain answer. "For example, NASA will not implement any system where you cannot explain how you got the answer and provide an audit trail," Abdallat explained.
Explainable AI gives us insight into the AI's decisions, improving the human-machine collaboration. However, this method does not work in all scenarios.
Consider self-driving cars, one of the benchmarks of our AI intelligence level. In fully autonomous vehicles, human operators are not enabled to aid the machine in split-second decisions. To solve this problem, experts adopt a hybrid approach.
"Waymo uses deep learning to detect pedestrians, but lidar and hardcoded programming add a safety net to prevent collisions," Abhimanyu explained. Developers use individual components that are not smart per se but can achieve smarter results when they are combined. By creating a smart design, developers challenge our understanding of the limitations of AI.
"The Google Duplex demo that amazed people is a really smart design coupled with state-of-the-art technology in speech-to-text and text-to-speech categories, which exploited what people look for in a smart human," Abhimanyu explained.
But these chatbots fail when it comes to natural conversations, which is still a challenging domain for AI. As an example, let's consider one of the major achievements in the past year, GPT-2, which stunned many with its content writing capabilities.
"GPT-2 can generate entire essays, but it is very hard to make it generate exactly what you want reliably and robustly in a live consumer setting," Abhimanyu shared. GPT-2 was trained on a huge library of quality documents from the internet, so it could predict what words should naturally follow a sentence or paragraph. But it had no idea what it was saying, nor could it be guided toward a certain direction. Experts believe being able to reliably and extensively control AI could mark the next step in our advancements.
The current AI algorithms were made possible on the back of big data -- that's why achieving this level of intelligence was not possible even with the best supercomputers decades ago. We are incrementally finding the next building blocks for smarter AI. Until we reach there, the most productive use of AI is on narrow domains where it outperforms humans.
Go here to see the original:
Levels And Limits Of AI – Forbes
Posted: at 9:45 am
Levels and Limits of AI.
I recently spoke with the innovation team of a Fortune 50 company about their 2020 initiatives, one of which was artificial intelligence. When I asked what specifically they want to use AI for, an executive replied, "Everything." I pushed a little more, asking, "Are there any specific problems that you're seeking AI vendors for?" The reply was something like "We want to use AI in all of our financial services groups." This was particularly unsatisfying considering that the company is a financial services company.
I have these kinds of conversations frequently. For example, I met with the head of a large government department to discuss artificial intelligence, and their top agency executive asked for a system that could automate the decision-making of key officials. When the executive was asked for details, he more or less wanted a robotic version of his existing employees.
AI is not a panacea and it cannot simply replace humans. Artificial intelligence is mathematical computation, not human intelligence, as I have discussed in previous posts. One of my key roles as an investor is separating real AI from AI hype.
Buyers should not focus on whether or not a company is "AI," but rather whether or not it solves a real problem. While technology is important, the most important part of any company is serving the customer. There are specific customer needs that artificial intelligence can address really well. Others, not so much. For example, AI may be well suited to detect digital fraud, but it would not be well suited to be a detective in the physical world. AI should be treated like any other software tool: as a product that needs to yield a return. To do so, it is important to understand what artificial intelligence can actually do, and what it can't.
There are several levels of artificial intelligence. A few years ago my friends John Frank and Jason Briggs, who run Diffeo, suggested breaking artificial intelligence into three levels of service: Acceleration, Augmentation, and Automation. Acceleration is taking an existing human process and helping humans do it faster. For example, the current versions of textual auto-complete that Google offers are acceleration AI: they offer a completed version of what the user might already say. The next level, augmentation, takes what a human is doing and augments it. In addition to speeding up what the human is doing (like acceleration), it makes the human's product better. An example of this is what Grammarly does to improve the grammar of text. The final level is automation. In the previous two levels there are still humans in the loop; automation achieves a task with no human in the loop. The aspiration here is Level 5 autonomous driving of the kind Aurora and Waymo are pursuing.
When evaluating AI companies it makes sense to ask if what they are setting out to achieve is actually attainable at the level of AI that the vendor is promising. Below is a rough demonstrative chart with the Difficulty of AI on the y-axis and Level of AI on the x-axis.
The dashed line is what I call the AI feasibility curve. Within the line is AI feasibility, which means that there is a technology, infrastructure and approach to actually deliver a successful product at that level of AI in the near term. In reality it is a curve, not a line, and it is neither concave nor convex; it has bumps. Certain problems are really difficult but are attainable because a spectacular AI team has worked really hard to "push out" the AI feasibility curve for that specific problem. AlphaGo is included because it was an incredibly difficult and computationally intensive task, but the brilliant team at Google was able to shift the curve out in that area. If a company proposes it has built a fully autonomous manager or strategy engine, I become highly skeptical. As you can see, the AI difficulty of those two tasks is quite high. The difficulty of AI is some function of the problem space and data quality (which I will discuss in a future article). In the chart, treat difficulty for AI as a directional illustration, not a quantifiable score.
When purchasing a vendor's AI, determine if its value proposition is feasible. If it is not, then the return on investment may be a disappointment. Look at whether it is being marketed as fully automated but is too difficult a problem for full automation; this could be a sign that the product is actually accelerated AI. Keeping the feasibility curve in mind is important for investing as well, because if the customer is not well served, then the company will eventually fail.
When evaluating a company, I try to determine where on this chart the company would fall. If it is still building out product, I think about its technology innovation. Will the engineers be able to shift out the curve in that particular problem space? In evaluating AI, pick products that you can know with confidence will provide ROI. Don't be like the Fortune 50 that is looking for AI for everything or the government agency trying to have AI that basically does exactly what one of its officers does. Instead, evaluate an AI product for what it really offers you. Then, make an informed decision.
Disclosure: Sequoia is an investor in Aurora and the author is an investor in Alphabet, both of which were used as examples in the article.
Continued here:
From Deception to Attrition: AI and the Changing Face of Warfare – War on the Rocks
Posted: at 9:45 am
Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part a.): How might artificial intelligence affect the character and/or the nature of war?
Ernest Swinton's The Defense of Duffer's Drift educated a few generations of military professionals in the West. It introduces a set of tactical lessons through six dreams seen by Lt. Backsight Forethought, who was responsible for the defense of a river crossing in the Boer War. Every dream unraveled combat situations triggered by the lieutenant's previous tactical decisions. The dreams revealed his mistakes, allowing him to gradually improve the defense. This is similar to how an artificial intelligence-powered system might evaluate combat scenarios. The only difference is that it would do millions of these evaluations instead of six, would consider much more information, and would do it at incredible speed. This is why artificial intelligence (AI) is likely to revolutionize warfare: It could qualitatively improve the key factors in war, human strategizing and decision-making speed.
Assessments and predictions about the future of warfare, as with any type of forecasting, are demanding endeavors by default. One of the challenges is the difficulty of resisting the temptation to make linear projections of past experience onto the expected future, especially since new technological developments seem to be rendering our older theories about conflict obsolete. Forecasting the evolution of warfare under the impact of AI is thus a formidable task. An approach that can reduce errors is to combine two sets of knowledge: understanding accurately the micro-dynamics of war (the most basic drivers of combat interaction) and assessing AI's impact on war in light of AI-specific abilities instead of humans' innate limitations.
The Micro-Dynamics of War
Traditionally, military and security experts have perceived warfare in Clausewitzian terms, through preponderance of force and large uncertainty. This view has been shaped by humans' physical and cognitive limitations rather than by the fundamental dynamics of war. Humans are not the quickest and most agile creatures. They cannot swiftly move away from a bullet's trajectory. They are also not the sharpest shooters. It took about 10,000 rounds to produce a casualty in World War II. This number was even higher during the U.S. counterinsurgency operations in Afghanistan and Iraq.
We learned to compensate for accuracy failures by relying on automatic fire or indiscriminate artillery strikes. Because we are not quick enough in decision-making and evasive movement, we chose the path of armor building. However, the fundamental element of combat dynamics is not firepower or fire accuracy. Based on my dissertation research, I argue that it is the manipulation of one's relative degree of lethal exposure in combat. And it is by better exploiting lethal exposure that AI systems will affect the future of warfare. Capabilities are just a reservoir from which a side can supply its combat efforts. A side with more troops and more weapons will have a larger reservoir.
Lethal exposure has always been the invisible hand of war, as it has practically complete control over how power preponderance can influence war outcomes. The currently dominant view of warfare emphasizes the importance of attrition of forces as the underlying combat dynamic. The exposure-driven concept of combat proposed here instead brings to prominence the critical role played by the speed of attrition of one's troops and weapons for military effectiveness and victory.
To better understand the logic of lethal exposure, let us consider a classical combat analysis scenario with interacting Blue and Red forces. Blue's exposure to Red's lethal fire, and thus its attrition, depends on its ability to avoid identification by Red. Basically, Blue's degree of exposure becomes dependent upon Red's knowledge about Blue's location. At minimum, exposure is determined by two factors: the physical presence of Blue in Red's range of fire and the knowledge of Red about Blue's exact location inside its targeting range. Exposure is the quality that reflects the degree to which a side's combat-engaged troops are subject to the lethal fire of its competitor, resulting in a varying level of attrition of its troops. To use a more familiar expression, the combat exposure of a military unit is the degree of its stealth.
It is not possible to strike an opponent beyond one's weapons' reach, no matter how accurate one is. And even inside the targeting range, it is hardly possible to hit the opponent without knowing its location. An extreme illustrative case is the perfect ambush, when the ambusher has the opponent in its weapon's sights while the ambushed has no clue about it. Exposure is close to zero for the ambusher and at its maximum for the ambushed. The degree of exposure of forces can completely block the impact of capabilities and fire accuracy on the speed of troop attrition. It is when the value of exposure is very high, as in hand-to-hand combat or trench warfare, that the effects of capabilities and fire accuracy are maximized. This logical illustration of combat dynamics allows us to open the black box of combat uncertainty.
To see this, consider two fighting forces, one with 100 units and another with 30 units. Keeping the conventional combat-related factors and conditions equal, numerical superiority will bring victory. However, as we unilaterally decrease the smaller force's degree of lethal exposure (through various tactical moves or technology), its speed of combat attrition will start diminishing too. In comparison to the unchanged attrition of the larger force, the smaller force's speed of attrition will eventually drop to a threshold value at which the larger force degrades, by contrast, much quicker. But not just quicker. This threshold value, at which the larger side faces a higher speed of proportional attrition (when X percent of its troops and weapons degrade quicker than the respective X percent of the smaller combatant's forces), can rightfully be labeled the margin of military victory. Technically, in order to remain alone on the battlefield after combat, the 30-unit force has to reach a kill rate higher than about 3.34 units of its opponent for each of its own destroyed units (roughly 100 divided by 30).
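A back-of-the-envelope simulation can make the exposure argument tangible. The sketch below is my own illustration, not the author's model: it scales a simple Lanchester-style attrition loop by an exposure factor, and it shows that once the smaller force's relative exposure drops below roughly the square of the force ratio, the 30-unit side outlasts the 100-unit side.

```python
# Illustrative exposure-scaled attrition model (assumed parameters, not the author's).
def simulate(blue, red, blue_exposure, red_exposure, kill_rate=0.05, dt=0.1):
    """Step the engagement until one side is wiped out; return the survivors."""
    while blue > 0 and red > 0:
        blue_losses = kill_rate * red * blue_exposure * dt   # Red's fire, damped by Blue's stealth
        red_losses = kill_rate * blue * red_exposure * dt
        blue, red = blue - blue_losses, red - red_losses
    return round(max(blue, 0), 1), round(max(red, 0), 1)

# Equal exposure: the 100-unit force wins comfortably.
print(simulate(blue=30, red=100, blue_exposure=1.0, red_exposure=1.0))
# Blue cuts its exposure below (30/100)^2 = 0.09 (an ambush-like posture): Blue prevails.
print(simulate(blue=30, red=100, blue_exposure=0.08, red_exposure=1.0))
```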
The described combat micro-dynamics allow even a belligerent with considerably fewer forces to win combat engagements and consequently wars against a militarily more powerful opponent.
Manipulating the degree of exposure requires an advantage in decision-making and movement speed as well as environmental awareness. One can decrease one's own exposure while instantly increasing the opponent's exposure in combat by having good knowledge of the combat environment (including the location of the opponent), achieving a quick decision-making speed, and being able to move faster on the battlefield. AI-driven combat systems could have crucial impacts in all three domains and change military strategy.
AIs Capabilities in Warfare
In September 2019 the Russian military conducted the strategic exercise Tsentr-2019, allegedly testing an AI-based command, control, communications, computers, intelligence, surveillance and reconnaissance system. According to Russian open sources, the new information system of combat command and control gathers combat-related information from various sources in real time, assesses combat scenarios, and provides commanders with a ranked list of combat mission decisions and an assessment of the resulting scenarios.
While still at a rudimentary level, this application transformed AI from an enabler into an actor in the combat environment. If its recommendations are implemented by the human commanders, it makes little difference whether AI directly commands automatic combat units or if it achieves this through human fighters although the second scenario is still much slower. We may be witnessing the birth of an automatic battlefield commander.
Would such an AI system actually be able to conduct combat assessment and mission planning better than a trained human professional? Claims that the military context generates unique challenges may not necessarily be true for AI. We should think about AI as another actor with its own specific capabilities, similar to an alien who sees our environment outside of the human visible spectrum: an actor that operates by simplifying and stripping a problem down to its essential elements in its search for optimally tailored solutions. A sound way of exploring this issue has been advocated by Iyad Rahwan of the Max Planck Institute for Human Development. Rahwan suggested taking an anthropological approach: since AI systems have become so complex that we cannot understand and predict what they will do, we should instead observe their behavior in the wild.
This would allow us to perceive the combat environment the way AI will see it. The beginning of this discussion aimed to propose a most likely candidate for AI's perception of combat, one driven by lethal exposure. AI systems are optimizers, trying to fulfill a task based on a set of incentives. The accuracy of their work depends on the quality of the data that humans feed into them. As technical progress evolves, the evolution of sensors that allow AI combat systems to connect directly to the environment (including counter-battery fire, other radars, video cameras, electro-optics, or even satellite observation) will solve the data collection problem and increase the output quality of AI-driven military systems. Because exposure, as described earlier, is the key driver of combat uncertainty dynamics, AI will identify it through data analysis and focus on instrumentalizing it.
Despite skepticism among researchers and policy practitioners, including claims that "you can't AI your way out of physics," AI has shown the capacity to outperform the best human strategists, and not only in perfect-information games such as chess and Go. A revealing example is the recent development of Pluribus, an AI able to beat elite professional players at multi-player poker, an imperfect-information strategic interaction. Moreover, the researchers ingeniously reduced the game's enormous complexity, solving the problem of too many decision points by bundling situationally similar ones into groups and then treating all decisions in a group as identical.
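That abstraction step can be sketched in a few lines (my own illustration with invented features, estimated win probability and pot-to-stack ratio, standing in for the game's real decision variables; it shows bucketing in general, not Pluribus's actual algorithm): decision points with similar features share one bucket, and a strategy is learned per bucket rather than per point.

from collections import defaultdict

def bucket_key(win_probability: float, pot_ratio: float, n_buckets: int = 10) -> tuple:
    # Map a raw decision point to a coarse bucket by rounding its features.
    return (round(win_probability * n_buckets), round(pot_ratio * n_buckets))

# Hypothetical decision points: (estimated win probability, pot-to-stack ratio).
decision_points = [(0.81, 0.32), (0.79, 0.30), (0.42, 0.68), (0.44, 0.71)]

buckets = defaultdict(list)
for point in decision_points:
    buckets[bucket_key(*point)].append(point)

# Every point in a bucket shares a single strategy entry instead of having its
# own, collapsing an intractable game tree to a manageable size.
for key, members in buckets.items():
    print(key, "->", members)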
Together with the poker example, which is arguably the most sophisticated, there is too much evidence to doubt AI's ability to solve complex problems better than humans. In the biotech industry, for instance, AI is not just a tool: it designs experiments, carries them out, and interprets their results. The genetic changes that AI systems generate in these labs represent discoveries that human scientists would likely not have identified; according to scientists working in the industry, some of the AI-created genes have no functions known to humans.
Moreover, while winning some of the most complex strategic games, AI has revealed an insightful trait. In defeating one of the world's best Go players, Lee Sedol, DeepMind's AlphaGo made a move that reportedly caught Sedol and other top players by surprise, a move no human would ever make. Facing another of DeepMind's systems, AlphaStar, one of the world's best StarCraft II players observed that AI explores strategies in ways a human player would not. Capabilities have an instrumental effect: they drive behavior.
AI is better equipped than humans to explore the exposure-manipulated speed-of-attrition concept, arguably the most basic, irreducible dynamic of combat. That concept also suggests the most effective solution in combat interaction, the Nash equilibrium of military strategies: whatever the other side does, reducing one's own speed of attrition while increasing the opponent's is the best possible response in combat.
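One way to read that claim formally (my notation, not the author's, treating the margin between the two attrition speeds as the payoff): write \dot{A}_i(s_i, s_j) for side i's speed of attrition under the strategy pair (s_i, s_j). The argument is that

s_i^{*} \in \arg\max_{s_i} \big[ \dot{A}_j(s_i, s_j) - \dot{A}_i(s_i, s_j) \big] \quad \text{for every } s_j,

that is, slowing one's own attrition while accelerating the opponent's is a best response regardless of what the other side plays. If both sides possess such a dominant strategy, the pair (s_1^{*}, s_2^{*}) is a Nash equilibrium: neither side gains by deviating unilaterally.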
Harnessing the power of modern computing, AI can explore a vastly larger part of the probability space, that is, of all possible developments in its environment. To do the same, humans incur significant cognitive costs and try to reduce them by searching the space of options for the solution closest to them. Imagine a landscape of solutions in which we are surrounded by hills, the height of each representing how good a solution is. It is quite possible that a higher hill somewhere in our surroundings represents a better solution, but we cannot see it because a lower hill closer to us blocks our view. We stop at that nearer solution, while AI, with its wider field of vision, will identify and explore the better options.
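The hill analogy can be made concrete with a toy search sketch (my own, on an invented one-dimensional landscape): a greedy climber starting nearby settles on the closer, lower hill, while a search that explores more widely finds the higher one.

import random

def fitness(x: float) -> float:
    # A toy landscape: a modest hill near x = 1 and a higher hill near x = 6.
    return max(0.0, 3 - (x - 1) ** 2) + max(0.0, 8 - (x - 6) ** 2)

def hill_climb(start: float, step: float = 0.05, iters: int = 2000) -> float:
    x = start
    for _ in range(iters):
        x = max((x - step, x, x + step), key=fitness)  # greedy local move
    return x

random.seed(0)
nearby = hill_climb(start=0.5)                      # stops on the hill closest to us
widest = max((hill_climb(random.uniform(-2, 10))    # wider exploration of the space
              for _ in range(50)), key=fitness)
print(f"greedy from a nearby start: x = {nearby:.2f}, value = {fitness(nearby):.2f}")
print(f"best of 50 random starts:   x = {widest:.2f}, value = {fitness(widest):.2f}")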
Providing AI with the logical shortcut of ready-built models would compensate for the debilitating shortage of big data on combat dynamics. Using combat simulations that capture the high-fidelity micro-dynamics of war, such as the exposure-guided speed-of-attrition concept suggested here, researchers can train effective AI combat systems. AI battlefield commanders able to outperform human experts could thus be developed even given the shortage of combat-related data. This would also help us understand how AI deals with combat uncertainty, and how that uncertainty might be instrumentalized to probe AI's weak sides.
Food for Thought
First, by implementing the two conditions of this AI-empowering approach listed above, international actors will end up in a real AI arms race, a race in which they compete not for the most powerful weapon but for the most optimized AI battlefield-commander algorithm. Such an algorithm would confer tremendous advantages in combat by harvesting AI's decision-making speed and optimized strategies.
Second, to fully exploit AI's potential by avoiding the transaction costs of the human interface, a few accompanying technological developments are necessary. Demand will emerge for specific hardware applications, such as sensors, exoskeletons, micro-turbine engines, robotic systems, and unmanned vehicles, that can best harvest AI's decision-making agility and reaction speed.
Third, the use of lethal force will become more accurate, quick, and localized, almost surgical. Because AI is expected to discriminate effectively among various types of targets in combat, military AI systems would likely offset the equalizing effect between regular armies and insurgents that automatic rifles once made possible, in part because of the restricting effects of international humanitarian law.
Fourth, given the resulting intensity and speed of combat attrition, fighting will become faster, destroying combat resources more quickly and in greater quantities and making wars costlier for the defeated. The way capabilities translate into victory will again become more transparent, so that not everyone will be able to afford, or be willing, to fight wars. State actors with significant financial resources will therefore likely gain an advantage in crises.
What should the U.S. government do? Because AI technologies with military uses are readily available in the commercial sector, banning the use of AI in warfare is impractical. The United States should instead invest more in studying AI behavior in warfare. Just as scientists evolve and study deadly viruses in order to build vaccines against them, the U.S. government should encourage the laboratory-based, anthropological study of AI behavior in virtual combat environments to understand how it "thinks." This would help open AI's black box and prepare policymakers to better address the potential unanticipated consequences of AI operations, also known as the "King Midas problem."
The U.S. government ought to encourage and fund disruptive thinking on combat dynamics, military strategy and grand strategy implications, accounting for the ways AI developments are going to remove many of the existing physical and intellectual limitations on how humans have been fighting wars.
Finally, the United States ought to prioritize investment in electromagnetic-pulse technology capable of breaking the connection between AI and the combat systems it controls, the very connection that upgrades AI from a virtual advisor into a physically present battlefield commander.
Dumitru Minzarari, PhD (University of Michigan), is a former military officer who served as state secretary for defense policy and international cooperation with the Moldovan Ministry of Defense, worked for the Organization for Security and Cooperation in Europe field missions in Georgia, Ukraine and Kyrgyzstan, and with several think tanks in Eastern Europe.
Image: U.S. Army (Photo by Conrad Johnson)
See the article here:
From Deception to Attrition: AI and the Changing Face of Warfare - War on the Rocks
I think, therefore I am said the machine to the stunned humans – Innovation Excellence
Posted: February 10, 2020 at 9:47 pm
by Dimis Michaelides
She didn't actually say it quite like that. She might have asked "to be or not to be?" and insisted that that is the question. In any case, machines are already up to asking profound existential questions.
On 11 February 2019 a very unusual event marked the latest man-versus-machine challenge: a debate on whether or not we should subsidize pre-schools. In a quite unique public gathering attended by hundreds of people, IBM's female-voiced AI system, formally known as Project Debater but lovingly nicknamed Miss Debater, faced Harish Natarajan, a debating champion of international renown.
The human won, this time. What stole the show, though, was the sophistication of the debate, the outstanding argumentation of both contestants (see highlights: https://www.youtube.com/watch?v=nJXcFtY9cWY) and the apparent humanity of Miss Debater.
Debating requires expert and general knowledge, reasoning, creative thinking, eloquent expression and the skilful appeal to emotions, all skills once considered unattainable by machines. No longer: in this case both the computer and its adversary showed ample evidence of mastering these competencies. Interestingly, Natarajan was expecting emotional arguments from Miss Debater, and it was his readiness to address them with spontaneity that helped him win. I am sure she has learnt from this and is probably already thinking about how to outdo her opponent next time.
Earlier milestones in human-versus-machine contests include the famous 1997 chess match in which IBM's Deep Blue computer beat grandmaster Garry Kasparov. In 2011 IBM's Watson supercomputer defeated two record-winning contestants on Jeopardy!, a general-knowledge quiz. And in 2016 Alphabet's AlphaGo famously proved that artificial intelligence can master the ancient and intricate game of Go by beating the world champion in a set of five games. The machine had learnt to play and had developed many creative new moves on its own.
Machines did not beat humans at chess, Jeopardy and Go the first time round; indeed, they learnt to win after having fought many losing battles. If AI can one day master complex philosophical reasoning, might that be the right moment to tell the likes of Zenon, Socrates and Descartes to get a real job?
Besides excelling at pure reason, it is becoming clear that superstar machines can really learn to display and manage emotions and to be amazingly creative, faculties hitherto monopolized by humans. DeepMind, the company that designed AlphaGo, argues that its products are already doing this.
Artificial intelligence is entering our lives in myriad ways, influencing sectors as diverse as transport, health, entertainment, politics and war. As we move from AI in fiction to AI in reality, there will be intended and unintended outcomes, so it is not too soon to consider the ethical and social dimensions of AI. A lawless AI future is frightening and potentially catastrophic. DeepMind has already set up a new research unit, DeepMind Ethics & Society, whose goal is to fund research on privacy, transparency, fairness, economic impact, governance and accountability, and the management of AI risk.
For those of us whose dreams of radical innovation and digital transformation have stumbled upon obstinate people with unbending mindsets, AI will be a big bonus. Expect intelligent machines which, through their own learning, will be able to change themselves at a much faster pace than those awkward humans.
Expect existential issues to troll and pollute our innocent debates on the value of innovation and entrepreneurship for mankind.
Expect arguments denying that we exist just because we think, because not all machines will be Cartesian.
And expect to be questioned by the ghost of a somewhat deranged prince on whether you should be or not be, because some machines are bound to be Shakespearean.
Image credit: Pixabay
Dimis Michaelides is a keynote speaker, author, consultant and trainer in leadership, creativity and innovation. Contact him for a workshop or a presentation at dimis@dimis.org or register for his newsletter at http://www.dimis.org . You can also connect with him on LinkedIn, Facebook and Twitter.
View original post here:
I think, therefore I am said the machine to the stunned humans - Innovation Excellence
AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun – ZDNet
Posted: at 9:47 pm
Geoffrey Hinton, center, talks about what future deep learning neural nets may look like, flanked by Yann LeCun of Facebook, right, and Yoshua Bengio of Montreal's MILA institute for AI, during a press conference at the 34th annual AAAI conference on artificial intelligence.
The rise of dedicated chips and systems for artificial intelligence will "make possible a lot of stuff that's not possible now," said Geoffrey Hinton, the University of Toronto professor who is one of the godfathers of the "deep learning" school of artificial intelligence, during a press conference on Monday.
Hinton joined his compatriots, Yann LeCun of Facebook and Yoshua Bengio of Canada's MILA institute, fellow deep learning pioneers, in an upstairs meeting room of the Hilton Hotel on the sidelines of the 34th annual conference on AI by the Association for the Advancement of Artificial Intelligence. They spoke for 45 minutes to a small group of reporters on a variety of topics, including AI ethics and what "common sense" might mean in AI. The night before, all three had presented their latest research directions.
Regarding hardware, Hinton went into an extended explanation of the technical aspects that constrain today's neural networks. The weights of a neural network, for example, have to be used hundreds of times, he pointed out, with frequent, temporary updates to those weights. He said the fact that graphics processing units (GPUs) have limited memory for weights and have to constantly store and retrieve them from external DRAM is a limiting factor.
Much larger on-chip memory capacity "will help with things like Transformer, for soft attention," said Hinton, referring to the wildly popular autoregressive neural network developed at Google in 2017. Transformers, which use "key/value" pairs to store and retrieve from memory, could be much larger with a chip that has substantial embedded memory, he said.
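For readers who want the mechanics behind that remark, here is a minimal sketch of the key/value lookup (toy sizes in NumPy, assumed shapes rather than any production implementation): each query is compared against all stored keys, and the resulting weights decide which values are retrieved. The key and value matrices are what grow with longer contexts, which is why more on-chip memory would let them be "much larger."

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the stored keys
    return weights @ V                               # weighted retrieval of values

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16                             # toy sizes; real models are far larger
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))
print(scaled_dot_product_attention(Q, K, V).shape)   # (8, 16)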
Also: Deep learning godfathers Bengio, Hinton, and LeCun say the field can fix its flaws
LeCun and Bengio agreed, with LeCun noting that GPUs "force us to do batching," where data samples are combined in groups as they pass through a neural network, "which isn't efficient." Another problem is that GPUs assume neural networks are built out of matrix products, which forces constraints on the kind of transformations scientists can build into such networks.
"Also sparse computation, which isn't convenient to run on GPUs ...," said Bengio, referring to instances where most of the data, such as pixel values, may be empty, with only a few significant bits to work on.
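A small illustration of the sparsity point (my own, not from the press conference): a dense matrix multiply does the same amount of work whether the activations are mostly zero or not, while a sparse representation touches only the nonzero entries; on today's GPUs it is usually the dense path that runs.

import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
activations = rng.normal(size=(1024, 1024))
activations[rng.random((1024, 1024)) > 0.01] = 0.0     # roughly 99% of activations zeroed
weights = rng.normal(size=(1024, 256))

dense_out = activations @ weights                      # full dense matmul regardless of zeros
sparse_out = sparse.csr_matrix(activations) @ weights  # touches only the nonzero entries
print(np.allclose(dense_out, sparse_out))              # same result, far less arithmetic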
LeCun predicted the new hardware would lead to "much bigger neural nets with sparse activations," and he and Bengio both emphasized an interest in doing the same amount of work with less energy. LeCun defended AI against claims it is an energy hog, however. "This idea that AI is eating the atmosphere, it's just wrong," he said. "I mean, just compare it to something like raising cows," he continued. "The energy consumed by Facebook annually for each Facebook user is 1,500 watt-hours," he said. Not a lot, in his view, compared to other energy-hogging technologies.
The biggest problem with hardware, mused LeCun, is that on the training side of things, it is a duopoly between Nvidia, for GPUs, and Google's Tensor Processing Unit (TPU), repeating a point he had made last year at the International Solid-State Circuits Conference.
Even more interesting than hardware for training, LeCun said, is hardware design for inference. "You now want to run on an augmented reality device, say, and you need a chip that consumes milliwatts of power and runs for an entire day on a battery." LeCun reiterated a statement made a year ago that Facebook is working on various internal hardware projects for AI, including for inference, but he declined to go into details.
Also: Facebook's Yann LeCun says 'internal activity' proceeds on AI chips
Today's neural networks are tiny, Hinton noted, with really big ones having perhaps just ten billion parameters. Progress on hardware might advance AI just by making much bigger nets with an order of magnitude more weights. "There are one trillion synapses in a cubic centimeter of the brain," he noted. "If there is such a thing as General AI, it would probably require one trillion synapses."
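A bit of back-of-the-envelope arithmetic (mine, assuming two bytes per parameter for half-precision storage and ignoring optimizer state) makes the gap Hinton describes concrete:

params_today = 10e9        # "really big" nets today: roughly ten billion parameters
params_brainlike = 1e12    # about one trillion synapses per cubic centimeter of brain
bytes_per_param = 2        # fp16 storage; optimizer state would multiply this several times

for label, n in [("10B-parameter net", params_today),
                 ("1T-parameter net", params_brainlike)]:
    gigabytes = n * bytes_per_param / 1e9
    print(f"{label}: ~{gigabytes:,.0f} GB just to hold the weights")

Roughly 20 GB fits on a single high-end GPU; roughly 2,000 GB does not, which is one reason Hinton ties the next jump in scale to new hardware.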
As for what "common sense" might look like in a machine, nobody really knows, Bengio maintained. Hinton complained people keep moving the goalposts, such as with natural language models. "We finally did it, and then they said it's not really understanding, and can you figure out the pronoun references in the Winograd Schema Challenge," a question-answering task used a computer language benchmark. "Now we are doing pretty well at that, and they want to find something else" to judge machine learning he said. "It's like trying to argue with a religious person, there's no way you can win."
But, one reporter asked, what's concerning to the public is not so much the lack of evidence of human understanding, but evidence that machines are operating in alien ways, such as the "adversarial examples." Hinton replied that adversarial examples show the behavior of classifiers is not quite right yet. "Although we are able to classify things correctly, the networks are doing it absolutely for the wrong reasons," he said. "Adversarial examples show us that machines are doing things in ways that are different from us."
LeCun pointed out animals can also be fooled just like machines. "You can design a test so it would be right for a human, but it wouldn't work for this other creature," he mused. Hinton concurred, observing "house cats have this same limitation."
Also: LeCun, Hinton, Bengio: AI conspirators awarded prestigious Turing prize
"You have a cat lying on a staircase, and if you bounce a soccer ball down the stairs toward a care, the cat will just sort of watch the ball bounce until it hits the cat in the face."
Another thing that could prove a giant advance for AI, all three agreed, is robotics. "We are at the beginning of a revolution," said Hinton. "It's going to be a big deal" for many applications such as vision. Rather than analyzing the entire contents of a static image or video frame, a robot creates a new "model of perception," he said.
"You're going to look somewhere, and then look somewhere else, so it now becomes a sequential process that involves acts of attention," he explained.
Hinton predicted last year's work by OpenAI in manipulating a Rubik's cube was a watershed moment for robotics, or, rather, an "AlphaGo moment," as he put it, referring to DeepMind's Go computer.
LeCun concurred, saying that Facebook is running such projects not because it has an extreme interest in robotics per se, but because robotics is seen as an "important substrate for advances in AI research."
It wasn't all gee-whiz; the three scientists offered skepticism on some points. While most research in deep learning that matters is done out in the open, some companies boast of AI while keeping the details a secret.
"It's hidden because it's making it seem important," said Bengio, when in fact, a lot of work in the depths of companies may not be groundbreaking. "Sometimes companies make it look a lot more sophisticated than it is."
Bengio continued his role as the most outspoken of the three on societal issues of AI, such as building ethical systems.
When LeCun was asked about the use of facial recognition algorithms, he noted that technology can be used for good and bad purposes, and that a lot depends on the democratic institutions of society. But Bengio pushed back slightly, saying, "What Yann is saying is clearly true, but prominent scientists have a responsibility to speak out." LeCun mused that it's not the job of science to "decide for society," prompting Bengio to respond, "I'm not saying decide, I'm saying we should weigh in because governments in some countries are open to that involvement."
Hinton, who frequently punctuates things with a humorous aside, noted toward the end of the gathering his biggest mistake with respect to Nvidia. "I made a big mistake with Nvidia," he said. "In 2009, I told an audience of 1,000 grad students they should go and buy Nvidia GPUs to speed up their neural nets. I called Nvidia and said I just recommended your GPUs to 1,000 researchers, can you give me a free one, and they said no.
"What I should have done, if I was really smart, was take all my savings and put it into Nvidia stock. The stock was at $20 then, now it's, like, $250."
Visit link: