Archive for the ‘Machine Learning’ Category
When AI in healthcare goes wrong, who is responsible? – Quartz
Posted: September 20, 2020 at 10:56 pm
Artificial intelligence can be used to diagnose cancer, predict suicide, and assist in surgery. In all these cases, studies suggest AI outperforms human doctors in set tasks. But when something does go wrong, who is responsible?
"There's no easy answer," says Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University. At any point in the process of implementing AI in healthcare, from design to data and delivery, errors are possible. "This is a big mess," says Lin. "It's not clear who would be responsible, because the details of why an error or accident happens matter. That event could happen anywhere along the value chain."
Design includes creation of both hardware and software, plus testing the product. Data encompasses the mass of problems that can occur when machine learning is trained on biased data, while deployment involves how the product is used in practice. AI applications in healthcare often involve robots working with humans, which further blurs the line of responsibility.
Responsibility can be divided according to where and how the AI system failed, says Wendell Wallach, a lecturer at Yale University's Interdisciplinary Center for Bioethics and the author of several books on robot ethics. "If the system fails to perform as designed or does something idiosyncratic, that probably goes back to the corporation that marketed the device," he says. "If it hasn't failed, if it's being misused in the hospital context, liability would fall on whoever authorized that usage."
Intuitive Surgical, the company behind the Da Vinci surgical system, has settled thousands of lawsuits over the past decade. Da Vinci robots always work in conjunction with a human surgeon, but the company has faced allegations of clear error, including machines burning patients and broken parts of machines falling into patients.
Some cases, though, are less clear-cut. If diagnostic AI trained on data that over-represents white patients then misdiagnoses a Black patient, it's unclear whether the culprit is the machine-learning company, those who collected the biased data, or the doctor who chose to listen to the recommendation. "If an AI program is a black box, it will make predictions and decisions as humans do, but without being able to communicate its reasons for doing so," writes attorney Yavar Bathaee in a paper outlining why the legal principles that apply to humans don't necessarily work for AI. "This also means that little can be inferred about the intent or conduct of the humans that created or deployed the AI, since even they may not be able to foresee what solutions the AI will reach or what decisions it will make."
The difficulty in pinning the blame on machines lies in the impenetrability of the AI decision-making process, according to a paper on tort liability and AI published in the AMA Journal of Ethics last year. "For example, if the designers of AI cannot foresee how it will act after it is released in the world, how can they be held tortiously liable?" write the authors. "And if the legal system absolves designers from liability because AI actions are unforeseeable, then injured patients may be left with fewer opportunities for redress."
AI, as with all technology, often works very differently in the lab than in a real-world setting. Earlier this year, researchers from Google Health found that a deep-learning system capable of identifying symptoms of diabetic retinopathy with 90% accuracy in the lab caused considerable delays and frustrations when deployed in real life.
Despite the complexities, clear responsibility is essential for artificial intelligence in healthcare, both because individual patients deserve accountability, and because a lack of responsibility allows mistakes to flourish. "If it's unclear who's responsible, that creates a gap; it could be no one is responsible," says Lin. "If that's the case, there's no incentive to fix the problem." One potential response, suggested by Georgetown legal scholar David Vladeck, is to hold everyone involved in the use and implementation of the AI system accountable.
AI and healthcare often work well together, with artificial intelligence augmenting the decisions made by human professionals. Even as AI develops, these systems aren't expected to replace nurses or automate human doctors entirely. But as AI improves, it gets harder for humans to go against machines' decisions. If a robot is right 99% of the time, then a doctor could face serious liability if they make a different choice. "It's a lot easier for doctors to go along with what that robot says," says Lin.
Ultimately, this means humans are ceding some authority to robots. There are many instances where AI outperforms humans, and so doctors should defer to machine learning. But patient wariness of AI in healthcare is still justified when there's no clear accountability for mistakes. "Medicine is still evolving. It's part art and part science," says Lin. "You need both technology and humans to respond effectively."
Read the original:
When AI in healthcare goes wrong, who is responsible? - Quartz
Is Wide-Spread Use of AI & Machine Intelligence in Manufacturing Still Years Away? – Automation World
Posted: at 10:56 pm
According to a new report by PMMI Business Intelligence, artificial intelligence (AI) and machine learning are the area of automation technology with the greatest capacity for expansion. This technology can optimize individual processes and functions of the operation; manage production and maintenance schedules; and expand and improve the functionality of existing technology such as vision inspection.
While AI is typically aimed at improving operation-wide efficiency, machine learning is directed more toward the actions of individual machines: learning during operation, identifying inefficiencies in areas such as rotation and movement, and then adjusting processes to correct them.
The advantages to be gained through the use of AI and machine learning are significant. One study released by Accenture and Frontier Economics found that by 2035, AI-empowered technology could increase labor productivity by up to 40%, creating an additional $3.8 trillion in direct value added (DVA) to the manufacturing sector.
However, only 1% of all manufacturers, both large and small, are currently utilizing some form of AI or machine learning in their operations. Most manufacturers interviewed said that they are trying to gain a better understanding of how to utilize this technology in their operations, and 45% of leading CPGs interviewed predict they will incorporate AI and/or machine learning within ten years.
A plant manager at a private-label SME reiterates that AI technology is still being explored, stating: "We are only now talking about how to use AI and predict it will impact nearly half of our lines in the next 10 years."
While CPGs forecast that machine learning will gain momentum in the next decade, the near-future applications are likely to come in vision and inspection systems. Manufacturers can utilize both AI and machine learning in tandem, such as deploying sensors to key areas of the operation to gather continuous, real-time data on efficiency, which can then be analyzed by an AI program to identify potential tweaks and adjustments to improve the overall process.
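As a rough illustration of that sensor-to-analysis loop, here is a minimal sketch that flags inefficient machine cycles from simulated sensor readings; the feature names, values, and choice of an isolation forest are assumptions for demonstration, not prescriptions from the PMMI report.

```python
# Illustrative sketch: flag inefficient machine cycles from per-cycle sensor data.
# Feature names and the IsolationForest approach are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated per-cycle readings: [rotation_speed_rpm, vibration_mm_s, cycle_time_s]
normal_cycles = rng.normal(loc=[1200, 2.0, 30.0], scale=[20, 0.2, 0.5], size=(500, 3))
degraded_cycles = rng.normal(loc=[1150, 3.5, 33.0], scale=[30, 0.4, 1.0], size=(20, 3))
readings = np.vstack([normal_cycles, degraded_cycles])

# The unsupervised model learns what "normal" operation looks like and scores outliers.
model = IsolationForest(contamination=0.05, random_state=0).fit(readings)
flags = model.predict(readings)  # -1 = anomalous cycle worth investigating

print(f"Cycles flagged for review: {(flags == -1).sum()} of {len(readings)}")
```

In a real deployment the flagged cycles, rather than being printed, would feed whatever scheduling or adjustment logic the plant uses.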
And, the report states, while these may appear to be expensive investments best left for the future, these technologies are increasingly affordable and offer solutions that can bring measurable efficiencies to smart manufacturing. In the days of COVID-19, gains to labor productivity and operational efficiency may be even more timely.
Source: PMMI Business Intelligence, Automation Timeline: The Drive Toward 4.0 Connectivity in Packaging and Processing
How do we know AI is ready to be in the wild? Maybe a critic is needed – ZDNet
Posted: at 10:56 pm
Mischief can happen when AI is let loose in the world, just like any technology. The examples of AI gone wrong are numerous, the most vivid in recent memory being the disastrously bad performance of Amazon's facial recognition technology, Rekognition, which had a propensity to erroneously match members of some ethnic groups with criminal mugshots to a disproportionate extent.
Given the risk, how can society know if a technology has been adequately refined to a level where it is safe to deploy?
"This is a really good question, and one we are actively working on, "Sergey Levine, assistant professor with the University of California at Berkeley's department of electrical engineering and computer science, told ZDNet by email this week.
Levine and colleagues have been working on an approach to machine learning where the decisions of a software program are subjected to a critique by another algorithm within the same program that acts adversarially. The approach is known as conservative Q-learning, and it was described in a paper posted on the arXiv preprint server last month.
ZDNet reached out to Levine this week after he posted an essay on Medium describing the problem of how to safely train AI systems to make real-world decisions.
Levine has spent years at Berkeley's robotic artificial intelligence and learning lab developing AI software to direct how a robotic arm moves within carefully designed experiments: carefully designed because you don't want something to get out of control when a robotic arm can do actual, physical damage.
Robotics often relies on a form of machine learning called reinforcement learning. Reinforcement learning algorithms are trained by testing the effect of decisions and continually revising a policy of action depending on how well the action affects the state of affairs.
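To make that trial-and-error loop concrete, here is a minimal tabular Q-learning sketch on a toy chain environment; this is standard textbook reinforcement learning, not code from Levine's group, and the environment and parameters are invented for illustration.

```python
# Tabular Q-learning on a tiny chain environment: the agent learns, by trial
# and error, which action in each state moves it toward the goal state.
import numpy as np

n_states, n_actions = 5, 2          # states 0..4; actions: 0 = left, 1 = right
goal = 4
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != goal:
        # Epsilon-greedy: mostly exploit the current policy, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = min(s + 1, goal) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == goal else 0.0
        # Core update: nudge Q(s, a) toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q, 2))  # action 1 (right) should dominate in every state
```

The key point for what follows is that this loop learns by acting in the environment, which is exactly what is risky when the "environment" is a road or a hospital.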
But there's the danger: Do you want a self-driving car to be learning on the road, in real traffic?
In his Medium post, Levine proposes developing "offline" versions of RL. In the offline world, RL could be trained using vast amounts of data, like any conventional supervised learning AI system, to refine the system before it is ever sent out into the world to make decisions.
Also: A Berkeley mash-up of AI approaches promises continuous learning
"An autonomous vehicle could be trained on millions of videos depicting real-world driving," he writes. "An HVAC controller could be trained using logged data from every single building in which that HVAC system was ever deployed."
To boost the value of reinforcement learning, Levine proposes moving from the strictly "online" scenario to an "offline" period of training, whereby algorithms are fed masses of logged data, much as in traditional supervised machine learning.
Levine uses the analogy of childhood development. Children receive many more signals from the environment than just the immediate results of actions.
"In the first few years of your life, your brain processed a broad array of sights, sounds, smells, and motor commands that rival the size and diversity of the largest datasets used in machine learning," Levine writes.
Which comes back to the original question, to wit, after all that offline development, how does one know when an RL program is sufficiently refined to go "online," to be used in the real world?
That's where conservative Q-learning comes in. Conservative Q-learning builds on the widely studied Q-learning, which is itself a form of reinforcement learning. The idea is to "provide theoretical guarantees on the performance of policies learned via offline RL," Levine explained to ZDNet. Those guarantees will block the RL system from carrying out bad decisions.
Imagine you had a long, long history kept in persistent memory of what actions are good actions that prevent chaos. And imagine your AI algorithm had to develop decisions that didn't violate that long collective memory.
"This seems like a promising path for us toward methods with safety and reliability guarantees in offline RL," says UC Berkeley assistant professor Sergey Levine, of the work he and colleagues are doing with "conservative Q-learning."
In a typical RL system, a value function is computed based on how much a certain choice of action will contribute to reaching a goal. That informs a policy of actions.
In the conservative version, the value function places a higher value on that past data in persistent memory about what should be done. In technical terms, everything a policy wants to do is discounted, so that there's an extra burden of proof to say that the policy has achieved its optimal state.
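In rough code terms, that "extra burden of proof" shows up as a penalty in the critic's training objective. The sketch below is a simplified rendering of the conservative Q-learning idea, with placeholder networks and a made-up batch standing in for a real offline dataset; the actual objective in the paper has additional terms and careful tuning.

```python
# Simplified sketch of a conservative Q-learning critic update (PyTorch).
# q_net, target_q_net, policy, and batch are placeholders; this is not the
# authors' implementation, only the shape of the conservative penalty.
import torch

def conservative_critic_loss(q_net, target_q_net, policy, batch, gamma=0.99, alpha=1.0):
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]

    # Standard Bellman error on actions that actually appear in the logged data.
    with torch.no_grad():
        a_next = policy(s_next)
        target = r + gamma * target_q_net(s_next, a_next)
    bellman_error = ((q_net(s, a) - target) ** 2).mean()

    # Conservative term: push down Q-values for actions the current policy
    # proposes and push up Q-values for actions seen in the dataset, so the
    # policy must "prove" an action is good with data rather than optimism.
    q_policy = q_net(s, policy(s))
    q_data = q_net(s, a)
    conservative_penalty = (q_policy - q_data).mean()

    return bellman_error + alpha * conservative_penalty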
A struggle ensues, Levine told ZDNet, making an analogy to generative adversarial networks, or GANs, a type of machine learning.
"The value function (critic) 'fights' the policy (actor), trying to assign the actor low values, but assign the data high values." The interplay of the two functions makes the critic better and better at vetoing bad choices. "The actor tries to maximize the critic," is how Levine puts it.
Through the struggle, a consensus emerges within the program. "The result is that the actor only does those things for which the critic 'can't deny' that they are good (because there is too much data that supports the goodness of those actions)."
Also: MIT finally gives a name to the sum of all AI fears
There are still some major areas that need refinement, Levine told ZDNet. The program at the moment has some hyperparameters that have to be designed by hand rather than being arrived at from the data, he noted.
"But so far this seems like a promising path for us toward methods with safety and reliability guarantees in offline RL," said Levine.
In fact, conservative Q-learning suggests there are ways to incorporate practical considerations into the design of AI from the start, rather than waiting till after such systems are built and deployed.
Also: To Catch a Fake: Machine learning sniffs out its own machine-written propaganda
The fact that it is Levine carrying out this inquiry should give the approach of conservative Q-learning added significance. With a firm grounding in real-world applications of robotics, Levine and his team are in a position to validate the actor-critic in direct experiments.
Indeed, the conservative Q-learning paper, which is lead-authored by Aviral Kumar of Berkeley and was done in collaboration with Google Brain, contains numerous examples of robotics tests in which the approach showed improvements over other kinds of offline RL.
There is also a blog post authored by Google if you want to learn more about the effort.
Of course, any system that relies on amassed data offline for its development will be relying on the integrity of that data. A successful critique of the kind Levine envisions will necessarily involve broader questions about where that data comes from, and what parts of it represent good decisions.
Some aspects of what is good and bad may be a discussion society has to have that cannot be automated.
Excerpt from:
How do we know AI is ready to be in the wild? Maybe a critic is needed - ZDNet
Solving the crux behind Apple’s Silicon Strategy – Medium
Posted: at 10:56 pm
In its latest keynote address, headed by CEO Tim Cook, Apple unveiled its new A14 Bionic chip, a 5 nm ARM-based chipset.
This System on a Chip (SoC) from Apple is expected to power iPhone 12 and iPad Air (2020) models. The chipset integrates around 11.8 billion transistors.
For over a decade, Apple's world-class silicon design team has been building and refining Apple SoCs. Using these designs, Apple has been able to develop the latest iPhone, iPad and Apple Watch models, which are industry leaders in class and performance. In June of 2020, Apple announced that it will transition the Mac to its custom silicon to offer better technological performance.
Now, Apple Silicon is basically a processor made in-house, akin to what powers the iPhone and iPad family of devices. This ARM move will end Apple's reliance on Intel chipsets for future Macs. The transition to its own silicon will also establish a common architecture across all Apple products, making it far easier for developers to write and optimize their apps for the entire ecosystem. In fact, developers can now start focusing on updating their applications to take advantage of the enhanced capabilities of Apple silicon.
Along with this, Apple also introduced macOS Big Sur earlier this year, which will be the next major macOS release (version 11.0) and includes technologies that will facilitate a smooth transition to the Apple silicon experience. This will be the first time developers are able to make their iOS and iPadOS apps available on the Mac without modifications. The Apple silicon-powered Macs will offer industry-leading performance per watt and higher-performance GPUs. To help developers get accustomed to the new transition, Apple is also launching the Universal App Quick Start Program to guide developers through the entire transition.
Apple plans to ship the new Mac by the end of the year and complete the transition in about two years. That being said, Apple will continue to release new software versions for Intel-based Macs for years to come.
Apple has been explicit about how serious it is about machine learning on its SoCs. The A14 includes second-generation machine learning accelerators in the CPU for 10 times faster machine learning calculations. The combination of the new Neural Engine, machine learning accelerators, advanced power management, unified memory architecture and the Apple high-performance GPU enables powerful on-device experiences for image recognition, natural language learning, analysing motion, and maybe even machine learning-enabled GPS.
According to a recent patent application, Apple has been working on a technology that implements a system for estimating a device's location based on a global positioning system consisting of Global Navigation Satellite System (GNSS) satellites, and that receives a set of parameters associated with the estimated position. The processor is further configured to apply those parameters and the estimated position to a machine learning model that has been trained on positions relative to the satellites. The estimated position and the output of the model are then provided to a Kalman filter for a more accurate location.
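The patent language sketches a pipeline in which a learned correction feeds a Kalman filter. Below is a rough sketch of only that general shape: the `correction_model` is a hypothetical stand-in for Apple's trained model, and the filter is a deliberately simple one-dimensional example rather than anything from the patent.

```python
# Sketch of fusing an ML-corrected GNSS position estimate with a 1-D Kalman filter.
# `correction_model` is hypothetical; Apple's actual model, state representation,
# and parameters are not public. Only the estimate -> correct -> filter flow is shown.
def kalman_update(x_est, p_est, measurement, process_var=1e-3, meas_var=4.0):
    # Predict step (constant-position model), then correct with the new measurement.
    p_pred = p_est + process_var
    gain = p_pred / (p_pred + meas_var)
    x_new = x_est + gain * (measurement - x_est)
    p_new = (1 - gain) * p_pred
    return x_new, p_new

def fuse_position(raw_gnss_fix, satellite_params, correction_model, state):
    x_est, p_est = state
    # Hypothetical learned model maps the raw fix plus satellite parameters to a
    # corrected position, which the filter then smooths over time.
    corrected = correction_model(raw_gnss_fix, satellite_params)
    return kalman_update(x_est, p_est, corrected)
```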
This technology may be significantly better than what a mobile device alone can achieve in most non-aided modes of operation. Apple's patent to improve GPS in the upcoming 5G era might give it an advantage over existing resources.
Apple's move to its own ARM chips comes just as the company unveils macOS version 11.0 (Big Sur). That means ARM-based Mac computers will continue to run macOS instead of switching to iOS 14, similar to the approach taken with existing Windows laptops that use Qualcomm ARM-based processors. Apple apparently has its hardware and software teams working together, given that they have found a way for all their applications to function seamlessly from day one of the launch, with Rosetta 2 acting as an emulator and translator that will allow Intel-built apps to run on Apple silicon-powered devices.
Moreover, the Apple ecosystem acts as the catalyst for innovation in the company and is not limited to its hardware and software products but extends to its services as well.
Putting a foot forward in that direction is the Apple One Subscription.
Apple, with its calm dignity, diligent market study and unflinching courage to innovate, has taken its own time to come up with its strategic silicon move. Apple stayed focused on its long-term goals instead of following the hype, trends and gimmicks set out by its competitors to gain customer attention. This ability to think differently is a driving force behind its success.
And owing to the current state of affairs, Apple has played it relatively safe this year, sticking to its core offerings. We can expect an exciting iPhone, iMac and macOS launch later this year.
Let's gear up for another round of innovation sponsored by Apple.
Boost Your Animation To 60 FPS Using AI – Hackaday
Posted: at 10:56 pm
The uses of artificial intelligence and machine learning continue to expand, with one of the more recent implementations being video processing. A new method can fill in frames to smooth out the appearance of video, which [LegoEddy] was able to use in one of his animated LEGO movies with some astonishing results.
His original animation of LEGO figures and sets was created at 15 frames per second. As an animator, he notes that it's orders of magnitude more difficult to get more frames than this with traditional methods, at least in his studio. This is where the artificial intelligence comes in. The program is able to interpolate between frames, creating new frames to fill the spaces between the originals. This allowed [LegoEddy] to increase his frame rate from 15 fps to 60 fps without having to actually create the additional frames.
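Going from 15 fps to 60 fps means synthesizing three new frames between every pair of originals. The naive cross-fade below only illustrates where those frames sit in time; the actual AI tool predicts motion with a learned interpolation network rather than blending pixels, which is what avoids ghosting on moving figures.

```python
# Naive frame interpolation: insert 3 blended frames between each pair of
# originals to turn 15 fps footage into 60 fps. Real AI interpolators predict
# motion (optical flow) instead of cross-fading, which avoids ghosting.
import numpy as np

def interpolate(frames, factor=4):
    """frames: list of HxWx3 uint8 arrays at the original frame rate."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor  # 0.25, 0.5, 0.75 of the way between consecutive originals
            blended = (1 - t) * a.astype(np.float32) + t * b.astype(np.float32)
            out.append(blended.astype(np.uint8))
    out.append(frames[-1])
    return out  # roughly factor times the original frame count
```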
While we've seen AI create art before, the improvement on traditionally produced video is a dramatic advancement. Especially since the AI is aware of depth and preserves information about the distance of objects from the camera. The software is also free, runs on any computer with an appropriate graphics card, and is available on GitHub.
Thanks to [BaldPower] for the tip!
50 Latest Data Science And Analytics Jobs That Opened Last Week – Analytics India Magazine
Posted: at 10:56 pm
Despite the pandemic, data scientist remains one of the most in-demand jobs. Here we list down 50 of the latest job openings for data science and analyst positions in cities such as Bangalore, Mumbai, Hyderabad, Pune and more, from last week.
(The jobs are sorted according to the years of experience required).
Location: Hyderabad
Skills Required: Machine learning and statistical models, big data processing technologies such as Hadoop, Hive, Pig and Spark, SQL, etc.
Apply here.
Location: Bangalore
Skills Required: Mathematical modelling using biological datasets, statistical and advanced data analytics preferably using R, Python and/or JMP, hands-on experience in data modelling, data analysis and visualisation, database systems like Postgres, MySQL, SQLServer, etc.
Apply here.
Location: Bangalore
Skills Required: Quantitative analytics or data modelling, predictive modelling, machine learning, clustering and classification techniques, Python, C, C++, Java, SQL, Big Data frameworks and visualisation tools like Cassandra, Hadoop, Spark, Tableau, etc.
Apply here.
Location: Bangalore
Skills Required: Advanced analytics, machine learning, AI techniques, cloud-based Big Data technology, Python, R, SQL, etc.
Apply here.
Location: Thiruvananthapuram, Kerala
Skills Required: Data mining techniques, statistical analysis, building high-quality prediction systems, etc.
Apply here.
Location: Bangalore
Skills Required: Advanced ML, DL, AI, and mathematical modelling and optimisation techniques, Python, NLP, TensorFlow, PyTorch, Keras, etc.
Apply here.
Location: Bangalore
Skills Required: Java, Python, R, C++, machine learning, data mining, mathematical optimisation, simulations, experience in e-commerce or supply chain, computational, programming, data management skills, etc.
Apply here.
Location: Bangalore
Skills Required: Statistics, Machine Learning, programming skills in various languages such as R, Python, etc., NLP, Matlab, linear algebra, optimisation, probability theory, etc.
Apply here.
Location: Bangalore
Skills Required: Knowledge of industry trends, R&D areas and computationally intensive processes (e.g. optimisation), Qiskit, classical approaches to machine learning, etc.
Apply here.
Location: Bangalore
Skills Required: Java, C++, Python, natural language processing systems, C/C++, Java, Perl or Python, statistical language modelling, etc.
Apply here.
Location: Khed, Maharashtra
Skills Required: Statistical computer languages like R, Python, SQL, machine learning techniques, advanced statistical techniques and concepts, etc.
Apply here.
Location: Bangalore
Skills Required: Foundational algorithms in either machine learning, computer vision or deep learning, NLP, Python, etc.
Apply here.
Location: Hyderabad
Skills Required: SQL CQL, MQL, Hive, NoSQL database concepts & applications, data modelling techniques (3NF, Dimensional), Python or R or Java, statistical models and machine learning algorithms, etc.
Apply here.
Location: Anekal, Karnataka
Skills Required: Machine Learning, deep learning-based techniques, OpenCV, DLib, Computer Vision techniques, TensorFlow, Caffe, Pytorch, Keras, MXNet, Theano, etc.
Apply here.
Location: Vadodara, Gujarat
Skills Required: Large and complex data assets, design and build explorative, predictive- or prescriptive models, Python, Spark, SQL, etc.
Apply here.
Location: Remote
Skills Required: Machine Learning & AI, data science Python, R, design and develop training programs, etc.
Apply here.
Location: Bangalore
Skills Required: Integrating applications and platforms with cloud technologies (i.e. AWS), GPU acceleration (i.e. CUDA and cuDNN), Docker containers, etc.
Apply here.
Location: Bangalore
Skills Required: ETL developer, SQL or Python developer, Netezza, etc.
Apply here.
Location: Bangalore
Skills Required: Machine learning, analytic consulting, product development, building predictive models, etc.
Apply here.
Location: Hyderabad
Skills Required: Hands-on data science, model building, boutique analytics consulting or captive analytics teams, statistical techniques, etc.
Apply here.
Location: Bangalore
Skills Required: Statistical techniques, statistical analysis tools (e.g., SAS, SPSS, R), etc.
Apply here.
Location: Bangalore
Skills Required: Probability, statistics, machine learning, data mining, artificial intelligence, big data platforms like Hadoop, Spark, Hive, etc.
Apply here.
Location: Thiruvananthapuram, Kerala
Skills Required: ML and DL approach, advanced Data/Text Mining/NLP/Computer Vision, Python, MLOps concepts, relational (MySQL) and non-relational / document databases (MongoDB/CouchDB), Microsoft Azure/AWS, etc.
Apply here.
Location: Bangalore
Skills Required: Data structures and algorithms, SQL, regex, HTTP, REST, JSON, XML, Maven, Git, JUnit, IntelliJ IDEA/Eclipse, etc.
Apply here.
Location: Delhi NCR, Bengaluru
Skills Required: Python, R, GA, Clevertap, Power BI, ML/DL algorithms, SQL, Advanced Excel, etc.
Apply here.
Location: Hyderabad
Skills Required: R language, Python, SQL, Power BI, Advance Excel, Geographical Information Systems (GIS), etc.
Apply here.
Location: Bangalore
Skills Required: Python, PySpark, MLib, Spark/Mesos, Hive, Hbase, Impala, OpenCV, NumPy, Matplotlib, SciPy, Google cloud, Azure cloud, AWS, Cloudera, Horton Works, etc.
Apply here.
Location: Mumbai
Skills Required: Programming languages (e.g. R, SAS, SPSS, Python), data visualisation techniques and software tools (e.g. Spotfire, SAS, R, Qlikview, Tableau, HTML5, D3), etc.
Apply here.
Location: Hyderabad
Skills Required: Neural networks, Python, data science, Pandas, SQL, Azure with Spark/Hadoop, etc.
Apply here.
Location: Bangalore
Skills Required: Strong statistical knowledge, statistical tools and techniques, Python, R, machine learning, etc.
Apply here.
Location: Bangalore
Skills Required: R or Python knowledge (Python+DS libraries, version control, etc.), ETL in SQL, Google/AWS platform, etc.
Apply here.
Location: Bangalore
Skills Required: R, Python, SQL, working with and creating data architectures, machine learning techniques, advanced statistical techniques, C, C++, Java, JavaScript, Redshift, S3, Spark, DigitalOcean, etc.
Apply here.
Location: Bangalore
Skills Required: Data-gathering, pre-processing data, model building, coding languages, including Python and Pyspark, big data technology stack, etc.
View original post here:
50 Latest Data Science And Analytics Jobs That Opened Last Week - Analytics India Magazine
Algorithms may never really figure us out, thank goodness – The Boston Globe
Posted: at 10:56 pm
An unlikely scandal engulfed the British government last month. After COVID-19 forced the government to cancel the A-level exams that help determine university admission, the British education regulator used an algorithm to predict what score each student would have received on their exam. The algorithm relied in part on how the school's students had historically fared on the exam. Schools with richer children tended to have better track records, so the algorithm gave affluent students, even those on track for the same grades as poor students, much higher predicted scores. High-achieving, low-income pupils whose schools had not previously performed well were hit particularly hard. After threats of legal action and widespread demonstrations, the government backed down and scrapped the algorithmic grading process entirely. This wasn't an isolated incident: In the United States, similar issues plagued the International Baccalaureate exam, which used an opaque artificial intelligence system to set students' scores, prompting protests from thousands of students and parents.
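To see why anchoring predictions to a school's historical record penalizes strong students at historically weak schools, consider a toy sketch; the blending weights and grade scale below are invented for illustration and do not reproduce the regulator's actual model.

```python
# Toy illustration: blending a teacher-assessed grade with the school's historical
# average drags down high achievers at low-performing schools. The 0.6/0.4 weights
# and the 1-6 grade scale are invented; Ofqual's real model was more complex.
def predicted_grade(teacher_grade, school_historical_avg, weight_school=0.6):
    return (1 - weight_school) * teacher_grade + weight_school * school_historical_avg

# Two students on track for the same top grade (say, 5 on a 1-6 scale):
print(predicted_grade(5, 5.2))  # school with a strong track record -> about 5.1
print(predicted_grade(5, 3.1))  # historically weaker school        -> about 3.9
```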
These episodes highlight some of the pitfalls of algorithmic decision-making. As technology advances, companies, governments, and other organizations are increasingly relying on algorithms to predict important social outcomes, using them to allocate jobs, forecast crime, and even try to prevent child abuse. These technologies promise to increase efficiency, enable more targeted policy interventions, and eliminate human imperfections from decision-making processes. But critics worry that opaque machine learning systems will in fact reflect and further perpetuate shortcomings in how organizations typically function, including by entrenching the racial, class, and gender biases of the societies that develop these systems. When courts and parole boards have used algorithms to forecast criminal behavior, for example, they have inaccurately identified Black defendants as future criminals more often than their white counterparts. Predictive policing systems, meanwhile, have led the police to unfairly target neighborhoods with a high proportion of non-white people, regardless of the true crime rate in those areas. Companies that have used recruitment algorithms have found that they amplify bias against women.
But there is an even more basic concern about algorithmic decision-making. Even in the absence of systematic class or racial bias, what if algorithms struggle to make even remotely accurate predictions about the trajectories of individuals' lives? That concern gains new support in a recent paper published in the Proceedings of the National Academy of Sciences. The paper describes a challenge, organized by a group of sociologists at Princeton University, involving 160 research teams from universities across the country and hundreds of researchers in total, including one of the authors of this article. These teams were tasked with analyzing data from the Fragile Families and Child Wellbeing Study, an ongoing study that measures various life outcomes for thousands of families who gave birth to children in large American cities around 2000. It is one of the richest data sets available to researchers: It tracks thousands of families over time, and has been used in more than 750 scientific papers.
The task for the teams was simple. They were given access to almost all of this data and asked to predict several important life outcomes for a sample of families. Those outcomes included the child's grade point average, their grit (a commonly used measure of passion and perseverance), whether the household would be evicted, the material hardship of the household, and whether the parent would lose their job.
The teams could draw on almost 13,000 predictor variables for each family, covering areas such as education, employment, income, family relationships, environmental factors, and child health and development. The researchers were also given access to the outcomes for half of the sample, and they could use this data to hone advanced machine-learning algorithms to predict each of the outcomes for the other half of the sample, which the organizers withheld. At the end of the challenge, the organizers scored the 160 submissions based on how well the algorithms predicted what actually happened in these peoples lives.
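In outline, the challenge followed a standard supervised-learning evaluation: fit models on the half of families whose outcomes were released, predict the held-out half, and score the predictions against what actually happened. A minimal sketch of that setup follows, with synthetic stand-in data because the real Fragile Families data is restricted; the model choice is arbitrary.

```python
# Minimal sketch of the challenge's train/holdout evaluation with synthetic data.
# The real task used roughly 13,000 predictors per family and six outcomes such as
# GPA, eviction, and layoff; this stand-in only mirrors the shape of the evaluation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 100))   # stand-in for the survey predictors
y = rng.normal(size=2000)          # stand-in for an outcome such as GPA

X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Submissions were scored on the held-out families; an R^2 near zero means the
# model barely improves on simply predicting the average outcome for everyone.
print(round(r2_score(y_hold, model.predict(X_hold)), 3))
```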
The results were disappointing. Even the best-performing prediction models were only marginally better than random guesses. The models were rarely able to predict a student's GPA, for example, and they were even worse at predicting whether a family would get evicted, experience unemployment, or face material hardship. And the models gave almost no insight into how resilient a child would become.
In other words, even having access to incredibly detailed data and modern machine learning methods designed for prediction did not enable the researchers to make accurate forecasts. The results of the Fragile Families Challenge, the authors conclude with notable understatement, "raise questions about the absolute level of predictive performance that is possible for some life outcomes, even with a rich data set."
Of course, machine learning systems may be much more accurate in other domains; this paper studied the predictability of life outcomes in only one setting. But the failure to make accurate predictions cannot be blamed on the failings of any particular analyst or method. Hundreds of researchers attempted the challenge, using a wide range of statistical techniques, and they all failed.
These findings suggest that we should doubt that big data can ever perfectly predict human behavior and that policymakers working in criminal justice policy and child-protective services should be especially cautious. Even with detailed data and sophisticated prediction techniques, there may be fundamental limitations on researchers' ability to make accurate predictions. Human behavior is inherently unpredictable, social systems are complex, and the actions of individuals often defy expectations.
And yet, disappointing as this may be for technocrats and data scientists, it also suggests something reassuring about human potential. If life outcomes are not firmly predetermined, if an algorithm, given a set of past data points, cannot predict a person's trajectory, then the algorithm's limitations ultimately reflect the richness of humanity's possibilities.
Bryan Schonfeld and Sam Winter-Levy are PhD candidates in politics at Princeton University.
Read more here:
Algorithms may never really figure us out, thank goodness - The Boston Globe
Why Deep Learning DevCon Comes At The Right Time – Analytics India Magazine
Posted: at 10:56 pm
The Association of Data Scientists (ADaSci) recently announced Deep Learning DEVCON or DLDC 2020, a two-day virtual conference that aims to bring machine learning and deep learning practitioners and experts from the industry on a single platform to share and discuss recent developments in the field.
Scheduled for 29th and 30th October, the conference comes at a time when deep learning, a subset of machine learning, has become one of the fastest-advancing technologies in the world. From being used in the fields of natural language processing to making self-driving cars, it has come a long way. As a matter of fact, reports suggest that by 2024, the deep learning market is expected to grow at a CAGR of 25%. Thus, it can easily be established that the advancements in the field of deep learning have only just begun and have a long road ahead.
Also Read: Top 7 Upcoming Deep Learning Conferences To Watch Out For
As a crucial subset of artificial intelligence and machine learning, deep learning has advanced rapidly over the last few years. Thus, it has been explored in various industries, from healthcare and eCommerce to advertising and finance, by many leading firms as well as startups across the globe.
While companies like Waymo and Google are using deep learning for their self-driving vehicles, Apple is using the technology for its voice assistant Siri. Alongside, many are using deep learning for automatic text generation, handwriting recognition, relevant caption generation, image colourisation, predicting earthquakes, as well as detecting brain cancers.
In recent news, Microsoft has introduced new advancements in their deep learning optimisation library DeepSpeed to enable next-gen AI capabilities at scale. It can now be used to train language models with one trillion parameters with fewer GPUs.
With that being said, in future, increased adoption is expected in machine translation, customer experience, content creation, image data augmentation, 3D printing and more. A lot of it can be attributed to the significant advancements in the hardware space as well as the democratisation of the technology, which have helped the field gain traction.
Also Read: Free Online Resources To Get Hands-On Deep Learning
Many researchers and scientists across the globe have been working with deep learning technology to leverage it in fighting the deadly COVID-19 pandemic. In fact, in recent news, some researchers have proposed deep learning-based automated CT image analysis tools that can differentiate COVID patients from those who aren't infected. In another research project, scientists have proposed a fully automatic deep learning system for diagnosing the disease as well as prognostic analysis. Many are also using deep neural networks for analysing X-ray images to diagnose COVID-19 among patients.
Along with these, startups like Zeotap, SilverSparro and Brainalyzed are leveraging the technology to either drive growth in customer intelligence or power industrial automation and AI solutions. With such solutions, these startups are making deep learning technology more accessible to enterprises and individuals.
Also Read: 3 Common Challenges That Deep Learning Faces In Medical Imaging
Companies like Shell, Lenskart, Snaphunt, Baker Hughes, McAfee, Lowe's, L&T and Microsoft are looking for data scientists who are equipped with deep learning knowledge. With significant advancements in this field, it has now become one of the hottest skills that companies are looking for in their data scientists.
Consequently, looking at these requirements, many edtech companies have started coming up with free online resources as well as paid certifications on deep learning to provide industry-relevant knowledge to enthusiasts and professionals. These courses and accreditations, in turn, bridge the major talent gap that emerging technologies typically face during their maturation.
Also Read: How To Switch Careers To Deep Learning
With such major advancements in the field and its increasing use cases, the area of deep learning has witnessed an upsurge in popularity as well as demand. Thus it is critical, now more than ever, to understand this complex subject in depth for better research and application. For that matter, one needs a thorough understanding of the fundamentals to build a career in this ever-evolving field.
And, for this reason, the Deep Learning DEVCON couldn't have come at a better time. Not only will it help amateurs as well as professionals get a better understanding of the field, but it will also provide them opportunities to network with leading developers and experts in the field.
Further, the talks and workshops included in the event will provide hands-on experience for deep learning practitioners on various tools and techniques. Starting with machine learning vs deep learning, followed by feed-forward neural networks and deep neural networks, the workshops will cover topics like GANs, recurrent neural networks, sequence modelling, autoencoders, and real-time object detection. The two-day workshop will also provide an overview of deep learning as a broad topic, and all attendees will receive a certificate.
The workshops will help participants have a strong understanding of deep learning, from basics to advanced, along with in-depth knowledge of artificial neural networks. With that, it will also clear concepts about tuning, regularising and improving the models as well as an understanding of various building blocks with their practical implementations. Alongside, it will also provide practical knowledge of applying deep learning in computer vision and NLP.
Considering the conference is virtual, it will also be convenient for participants to join the talks and workshops from the comfort of their homes. Thus, it is a perfect opportunity to get first-hand experience of the complex world of deep learning alongside leading experts and the best minds in the field, who will share their relevant experience to encourage enthusiasts and amateurs.
To register for Deep Learning DevCon 2020, visit here.
Read this article:
Why Deep Learning DevCon Comes At The Right Time - Analytics India Magazine
Six notable benefits of AI in finance, and what they mean for humans – Daily Maverick
Posted: at 10:55 pm
Addressing AI anxiety
A common narrative around emerging technologies like AI, machine learning, and robotic process automation is the anxiety and fear that they'll replace humans. In South Africa, with an unemployment rate of over 30%, these concerns are valid.
But if we dig deep into what we can do with AI, we learn it will elevate the work that humans do, making it more valuable than ever.
Sage research found that most senior financial decision-makers (90%) are comfortable with automation performing more of their day-to-day accounting tasks in the future, and 40% believe that AI and machine learning (ML) will improve forecasting and financial planning.
What's more, two-thirds of respondents expect emerging technology to audit results continuously and to automate period-end reporting and corporate audits, reducing time to close in the process.
The key to realising these benefits is to secure buy-in from the entire organisation. With 87% of CFOs now playing a hands-on role in digital transformation, their perspective on technology is key to creating a digitally receptive team culture. And their leadership is vital in ensuring their organisations maximise their technology investments. Until employees make the same mindset shift as CFOs have, they'll need to be guided and reassured about the business's automation strategy and the potential for upskilling.
Six benefits of AI in layman's terms
Speaking during an exclusive virtual event to announce the results of the CFO 3.0 research, as well as the launch of Sage Intacct in South Africa, Aaron Harris, CTO of Sage, said one reason for the misperception about AI's impact on business and labour is that SaaS companies too often speak in technical jargon.
"We talk about AI and machine learning as if they're these magical capabilities, but we don't actually explain what they do and what problems they solve. We don't put it into terms that matter for business leaders and labour. We don't do a good job as an industry of explaining that machine learning isn't an outcome we should be looking to achieve; it's the technology that enables business outcomes, like efficiency gains and smarter predictive analytics."
For Harris, AI has remarkable benefits in six key areas:
Digital culture champions
Evolving from a traditional management style that relied on intuition to a more contemporary one based on data-driven evidence can be a culturally disruptive process. Interestingly, driving a cultural change wasn't a concern for most South African CFOs, with 73% saying their organisations are ready for more automation.
In fact, AI holds no fear for senior financial decision-makers: over two-thirds are not at all concerned about it, and only one in 10 believe that it will take away jobs.
So, how can businesses reimagine the work of humans when software bots are taking care of all the repetitive work?
How can we leverage the unique skills of humans, like collaboration, contextual understanding, and empathy?
"The future world is a world of connections," says Harris. "It will be about connecting humans in ways that allow them to work at a higher level. It will be about connecting businesses across their ecosystems so that they can implement digital business models to effectively and competitively operate in their markets. And it will be about creating connections across technology so that traditional, monolithic experiences are replaced with modern ones that reflect new ways of working and that are tailored to how individuals and humans will be most effective in this world."
New world of work
We can envision this world across three areas:
Sharing knowledge and timelines on strategic developments and explaining the significance of these changes will help CFOs to alleviate the fear of the unknown.
Technology may be the enabler driving this change, but how it transforms a business lies with those who are bold enough to take the lead. DM
Visit link:
Six notable benefits of AI in finance, and what they mean for humans - Daily Maverick
Twitter is looking into why its photo preview appears to favor white faces over Black faces – The Verge
Posted: at 10:55 pm
Twitter said it was looking into why the neural network it uses to generate photo previews apparently chooses to show white people's faces more frequently than Black faces.
Several Twitter users demonstrated the issue over the weekend, posting examples of posts that had a Black person's face and a white person's face. Twitter's preview showed the white faces more often.
The informal testing began after a Twitter user tried to post about a problem he noticed in Zoom's facial recognition, which was not showing the face of a Black colleague on calls. When he posted to Twitter, he noticed it too was favoring his white face over his Black colleague's face.
Users discovered the preview algorithm chose non-Black cartoon characters as well.
When Twitter first began using the neural network to automatically crop photo previews, machine learning researchers explained in a blog post how they started with facial recognition to crop images, but found it lacking, mainly because not all images have faces:
Previously, we used face detection to focus the view on the most prominent face we could find. While this is not an unreasonable heuristic, the approach has obvious limitations since not all images contain faces. Additionally, our face detector often missed faces and sometimes mistakenly detected faces when there were none. If no faces were found, we would focus the view on the center of the image. This could lead to awkwardly cropped preview images.
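That heuristic can be paraphrased in a few lines of Python; the sketch below is a reconstruction of the behavior described in the excerpt, not Twitter's actual implementation, and `detect_faces` is a placeholder for whatever detector was used.

```python
# Reconstruction of the earlier cropping heuristic described above: focus the
# preview on the most prominent detected face, else fall back to the image center.
# `detect_faces` is a placeholder, not Twitter's actual detector.
def choose_crop_center(image, detect_faces):
    """image: HxWx3 array; detect_faces returns a list of (x, y, w, h) boxes."""
    faces = detect_faces(image)
    if faces:
        # Approximate "most prominent" as the largest bounding box.
        x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
        return (x + w // 2, y + h // 2)
    # No faces found: crop around the image center, which, as the post notes,
    # could produce awkward previews and motivated the move to a saliency model.
    height, width = image.shape[:2]
    return (width // 2, height // 2)
```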
Twitter chief design officer Dantley Davis tweeted that the company was investigating the neural network, as he conducted some unscientific experiments with images.
Liz Kelley of the Twitter communications team tweeted Sunday that the company had tested for bias but hadn't found evidence of racial or gender bias in its testing. "It's clear that we've got more analysis to do," Kelley tweeted. "We'll open source our work so others can review and replicate."
Twitter chief technology officer Parag Agrawal tweeted that the model needed continuous improvement, adding he was eager to learn from the experiments.