
Archive for the ‘Machine Learning’ Category

Combating the coronavirus with Twitter, data mining, and machine learning – TechRepublic

Posted: February 4, 2020 at 9:52 am



Social media can send up an early warning sign of illness, and data analysis can predict how it will spread.

The coronavirus illness (nCoV) is now an international public health emergency, bigger than the SARS outbreak of 2003. Unlike SARS, this time around scientists have better genome sequencing, machine learning, and predictive analysis tools to understand and monitor the outbreak.

During the SARS outbreak, it took five months for scientists to sequence the virus's genome. However, the first 2019-nCoV case was reported in December, and scientists had the genome sequenced by January 10, only a month later.

Researchers have been using mapping tools to track the spread of disease for several years. Ten European countries started Influenza Net in 2003 to track flu symptoms as reported by individuals, and the American version, Flu Near You, started a similar service in 2011.

Lauren Gardner, a civil engineering professor at Johns Hopkins and the co-director of the Center for Systems Science and Engineering, led the effort to launch a real-time map of the spread of the 2019-nCoV. The site displays statistics about deaths and confirmed cases of coronavirus on a worldwide map.

Este Geraghty, MD, MS, MPH, GISP, chief medical officer and health solutions director at Esri, said that since the SARS outbreak in 2003 there has been a revolution in applied geography through web-based tools.

"Now as we deploy these tools to protect human lives, we can ingest real-time data and display results in interactive dashboards like the coronavirus dashboard built by Johns Hopkins University using ArcGIS," she said.


With this outbreak, scientists have another source of data that did not exist in 2003: Twitter and Facebook. In 2014, Chicago's Department of Innovation and Technology built an algorithm that used social media mining and illness prediction technologies to target restaurant inspections. It worked: The algorithm found violations about 7.5 days before the normal inspection routine did.

Theresa Do, MPH, leader of the Federal Healthcare Advisory and Solutions team at SAS, said that social media can be used as an early indicator that something is going on.

"When you're thinking on a world stage, a lot of times they don't have a lot of these technological advances, but what they do have is cell phones, so they may be tweeting out 'My whole village is sick, something's going on here,' she said.

Do said an analysis of social media posts can be combined with other data sources to predict who is most likely to develop illnesses like the coronavirus illness.

"You can use social media as a source but then validate it against other data sources," she said. "It's not always generalizable (is generalizable a word?), but it can be a sentinel source."

Do said predictive analytics has made significant advances since 2003, including refining the ability to combine multiple data sources. For example, algorithms can look at names on plane tickets and compare that information with data from other sources to predict who has been traveling to certain areas.

"Algorithms can allow you to say 'with some likelihood' it's likely to be the same person," she said.

The current challenge is identifying gaps in the data. She said that researchers have to balance the need for real-time data against privacy concerns.

"If you think about the different smartwatches that people wear, you can tell if people are active or not and use that as part of your model, but people aren't always willing to share that because then you can track where someone is at all times," she said.

Do said that the coronavirus outbreak resembles the SARS outbreak, but that governments are sharing data more openly this time.

"We may be getting a lot more positives than they're revealing and that plays a role in how we build the models," she said. "A country doesn't want to be looked at as having the most cases but that is how you save lives."


This map from Johns Hopkins shows reported cases of 2019-nCoV as of January 30, 2020 at 9:30 pm. The yellow line in the graph is cases outside of China while the orange line shows reported cases inside the country.

Image: 2019-nCoV Global Cases by Johns Hopkins Center for Systems Science and Engineering

Go here to read the rest:

Combating the coronavirus with Twitter, data mining, and machine learning - TechRepublic

Written by admin

February 4th, 2020 at 9:52 am

Posted in Machine Learning

In Coronavirus Response, AI is Becoming a Useful Tool in a Global Outbreak – Machine Learning Times – machine learning & data science news – The…

Posted: at 9:52 am



By: Casey Ross, National Technology Correspondent, StatNews.com

Surveillance data collected by healthmap.org show confirmed cases of the new coronavirus in China.

Artificial intelligence is not going to stop the new coronavirus or replace the role of expert epidemiologists. But for the first time in a global outbreak, it is becoming a useful tool in efforts to monitor and respond to the crisis, according to health data specialists.

In prior outbreaks, AI offered limited value because of a shortage of the data needed to provide updates quickly. But in recent days, millions of posts about the coronavirus on social media and news sites have allowed algorithms to generate near-real-time information for public health officials tracking its spread.

"The field has evolved dramatically," said John Brownstein, a computational epidemiologist at Boston Children's Hospital who operates a public health surveillance site called healthmap.org that uses AI to analyze data from government reports, social media, news sites, and other sources.

"During SARS, there was not a huge amount of information coming out of China," he said, referring to a 2003 outbreak of an earlier coronavirus that emerged from China, infecting more than 8,000 people and killing nearly 800. "Now, we're constantly mining news and social media."

Brownstein stressed that his AI is not meant to replace the information-gathering work of public health leaders, but to supplement their efforts by compiling and filtering information to help them make decisions in rapidly changing situations.

"We use machine learning to scrape all the information, classify it, tag it, and filter it, and then that information gets pushed to our colleagues at WHO who are looking at this information all day and making assessments," Brownstein said. "There is still the challenge of parsing whether some of that information is meaningful or not."
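As a toy illustration of the classify-and-filter stage Brownstein describes, the sketch below trains a small text classifier and forwards only posts flagged as outbreak-related. The training snippets and labels are invented, and healthmap.org's real pipeline is far more elaborate.

```python
# Toy "classify, tag, and filter" stage for news/social-media surveillance.
# Training snippets and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "dozens hospitalized with severe pneumonia in the city",
    "health officials confirm new respiratory virus cases",
    "local team wins championship game in overtime",
    "stock market closes higher on tech earnings",
]
train_labels = ["outbreak", "outbreak", "other", "other"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_posts, train_labels)

incoming = ["officials report severe pneumonia cases in my village"]
for post, label in zip(incoming, classifier.predict(incoming)):
    if label == "outbreak":  # filter: push only relevant items to analysts
        print("flag for review:", post)
```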

These AI surveillance tools have been available in public health for more than a decade, but the recent advances in machine learning, combined with greater data availability, are making them much more powerful. They are also enabling uses that stretch beyond baseline surveillance, to help officials more accurately predict how far and how fast outbreaks will spread, and which types of people are most likely to be affected.

"Machine learning is very good at identifying patterns in the data, such as risk factors that might identify zip codes or cohorts of people that are connected to the virus," said Don Woodlock, a vice president at InterSystems, a global vendor of electronic health records that is helping providers in China analyze data on coronavirus patients.

To continue reading this article click here.

See the rest here:

In Coronavirus Response, AI is Becoming a Useful Tool in a Global Outbreak - Machine Learning Times - machine learning & data science news - The...

Written by admin

February 4th, 2020 at 9:52 am

Posted in Machine Learning

Top Machine Learning Services in the Cloud – Datamation

Posted: at 9:52 am



Machine Learning services in the cloud are a critical area of the modern computing landscape, providing a way for organizations to better analyze data and derive new insights. Accessing these services via the cloud tends to be efficient in terms of cost and staff hours.

Machine Learning (often abbreviated as ML) is a subset of Artificial Intelligence (AI) and attempts to 'learn' from data sets in several different ways, including both supervised and unsupervised learning. There are many different technologies that can be used for machine learning, with a variety of commercial tools as well as open source frameworks.

While organizations can choose to deploy machine learning frameworks on premises, it is typically a complex and resource-intensive exercise. Machine Learning benefits from specialized hardware, including inference chips and optimized GPUs. Machine Learning frameworks can also be challenging to deploy and configure properly. This complexity has led to the rise of Machine Learning services in the cloud, which provide the right hardware and optimally configured software to enable organizations to easily get started with Machine Learning.

There are several key features that are part of most machine learning cloud services.

AutoML - The automated Machine Learning feature automatically helps to build the right model.
Machine Learning Studio - The studio concept is all about providing a developer environment where machine learning models and data modelling scenarios can be built.
Open source framework support - The ability to support an existing framework such as TensorFlow, MXNet and Caffe is important, as it helps to enable model portability.

When evaluating the different options for machine learning services in the cloud, consider the following criteria:

In this Datamation top companies list, we spotlight the vendors that offer the top machine learning services in the cloud.

Value proposition for potential buyers: Alibaba is a great option for users that have machine learning needs where data sets reside around the world, especially in Asia, where Alibaba is a leading cloud service provider.

Value proposition for potential buyers: Amazon Web Services has the broadest array of machine learning services in the cloud today, leading with its SageMaker portfolio that includes capabilities for building, training and deploying models in the cloud.

Value proposition for potential buyers: Google's set of Machine Learning services is also expansive and growing, with both generic and purpose-built services for specific use cases.

Value proposition for potential buyers: IBM Watson Machine Learning enables users to run models on any cloud, or just on the IBM Cloud.

Value proposition for potential buyers: For organizations that have already bought into the Microsoft Azure cloud, Azure Machine Learning is a good fit, providing a cloud environment to train, deploy and manage machine learning models.

Value proposition for potential buyers: Oracle Machine Learning is a useful tool for organizations already using Oracle Cloud applications that want to build data mining notebooks.

Value proposition for potential buyers: Salesforce Einstein is a purpose-built machine learning platform that is tightly integrated with the Salesforce platform.

More:

Top Machine Learning Services in the Cloud - Datamation

Written by admin

February 4th, 2020 at 9:52 am

Posted in Machine Learning

Reinforcement Learning (RL) Market Report & Framework, 2020: An Introduction to the Technology – Yahoo Finance

Posted: at 9:52 am



Dublin, Feb. 04, 2020 (GLOBE NEWSWIRE) -- The "Reinforcement Learning: An Introduction to the Technology" report has been added to ResearchAndMarkets.com's offering.

These days, machine learning (ML), which is a subset of computer science, is one of the most rapidly growing fields in the technology world. It is considered to be a core field for implementing artificial intelligence (AI) and data science.

The adoption of data-intensive machine learning methods like reinforcement learning is playing a major role in decision-making across various industries such as healthcare, education, manufacturing, policing, financial modeling and marketing. The growing demand for more complex machine working is driving the demand for learning-based methods in the ML field. Reinforcement learning also presents a unique opportunity to address the dynamic behavior of systems.

This study was conducted in order to understand the current state of reinforcement learning and track its adoption across various verticals, and it seeks to put forth ways to fully exploit the benefits of this technology. This study will serve as a guide and benchmark for technology vendors, manufacturers of the hardware that supports AI, and the end users who will ultimately use this technology. Decision-makers will find the information useful in developing business strategies and in identifying areas for research and development.

The report includes:

Key Topics Covered

Chapter 1 Reinforcement Learning

Chapter 2 Bibliography

List of Tables
Table 1: Reinforcement Learning vs. Supervised Learning vs. Unsupervised Learning
Table 2: Global Machine Learning Market, by Region, Through 2024

List of Figures
Figure 1: Reinforcement Learning Process
Figure 2: Reinforcement Learning Workflow
Figure 3: Artificial Intelligence vs. Machine Learning vs. Reinforcement Learning
Figure 4: Machine Learning Applications
Figure 5: Types of Machine Learning
Figure 6: Reinforcement Learning Market Dynamics
Figure 7: Global Machine Learning Market, by Region, 2018-2024

For more information about this report visit https://www.researchandmarkets.com/r/g0ad2f

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

CONTACT: ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For E.S.T. Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

Go here to see the original:

Reinforcement Learning (RL) Market Report & Framework, 2020: An Introduction to the Technology - Yahoo Finance

Written by admin

February 4th, 2020 at 9:52 am

Posted in Machine Learning

Reinforcement Learning: An Introduction to the Technology – Yahoo Finance

Posted: at 9:52 am



NEW YORK, Feb. 3, 2020 /PRNewswire/ --

Report Includes:
- A general framework for deep Reinforcement Learning (RL), also known as a semi-supervised learning model in the machine learning paradigm
- An assessment of the breadth and depth of RL applications in real-world domains, including increased data efficiency and stability as well as multi-tasking
- An understanding of the RL algorithm from different aspects, to persuade decision makers and researchers to put more effort into RL research

Read the full report: https://www.reportlinker.com/p05843529/?utm_source=PRN

Reasons for Doing This Report: These days, machine learning (ML), which is a subset of computer science, is one of the most rapidly growing fields in the technology world. It is considered to be a core field for implementing artificial intelligence (AI) and data science.

The adoption of data-intensive machine learning methods like reinforcement learning is playing a major role in decision-making across various industries such as healthcare, education, manufacturing, policing, financial modelling and marketing. The growing demand for more complex machine working is driving the demand for learning-based methods in the ML field.

Reinforcement learning also presents a unique opportunity to address the dynamic behavior of systems. This study was conducted in order to understand the current state of reinforcement learning and track its adoption across various verticals, and it seeks to put forth ways to fully exploit the benefits of this technology. This study will serve as a guide and benchmark for technology vendors, manufacturers of the hardware that supports AI, and the end users who will ultimately use this technology.

Decision-makers will find the information useful in developing business strategies and in identifying areas for research and development.

Read the full report: https://www.reportlinker.com/p05843529/?utm_source=PRN

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

Contact Clare: clare@reportlinker.com
US: (339)-368-6001
Intl: +1 339-368-6001

View original content:http://www.prnewswire.com/news-releases/reinforcement-learning-an-introduction-to-the-technology-300997487.html

SOURCE Reportlinker

Continue reading here:

Reinforcement Learning: An Introduction to the Technology - Yahoo Finance

Written by admin

February 4th, 2020 at 9:52 am

Posted in Machine Learning

This tech firm used AI & machine learning to predict Coronavirus outbreak; warned people about danger zones – Economic Times

Posted: at 9:52 am



A couple of weeks after the coronavirus outbreak began, the disease has become a full-blown pandemic. According to official Chinese statistics, more than 130 people have died from the mysterious virus.

Contagious diseases may be diagnosed by men and women in face masks and lab coats, but warning signs of an epidemic can be detected by computer programmers sitting thousands of miles away. Around the tenth of January, news of a flu outbreak in China's Hubei province started making its way to mainstream media. It then spread to other parts of the country and, subsequently, overseas.

But the first to report an impending biohazard was BlueDot, a Canadian firm that specializes in infectious disease surveillance. They predicted an impending outbreak of coronavirus on December 31 using an artificial intelligence-powered system that combs through animal and plant disease networks, news reports on vernacular websites, government documents, and other online sources to warn its clients against traveling to danger zones like Wuhan, well before foreign governments started issuing travel advisories.

They further used global airline ticketing data to correctly predict that the virus would spread to Seoul, Bangkok, Taipei, and Tokyo. Machine learning and natural language processing techniques were also employed to create models that process large amounts of data in real time, including airline ticketing data, news reports in 65 languages, and animal and plant disease networks.
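A back-of-envelope sketch of the flight-volume idea: rank destinations by ticketed passenger volume out of the outbreak's origin city. The city names and counts below are invented placeholders; BlueDot's models weigh far more signals than this.

```python
# Hedged sketch: rank likely spread destinations by outbound ticket volume.
# Passenger counts are invented for illustration.
passengers_from_origin = {
    "Bangkok": 107_000,
    "Tokyo": 63_000,
    "Seoul": 41_000,
    "Taipei": 35_000,
    "Sydney": 18_000,
}
ranked = sorted(passengers_from_origin, key=passengers_from_origin.get, reverse=True)
print("cities to watch first:", ranked[:4])
```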


"We know that governments may not be relied upon to provide information in a timely fashion. We can pick up news of possible outbreaks, little murmurs on forums or blogs, indications of some kind of unusual events going on," Kamran Khan, founder and CEO of BlueDot, told a news magazine.

The death toll from the Coronavirus rose to 81 in China, with thousands of new cases registered each day. The government has extended the Lunar New Year holiday by three days to restrict the movement of people across the country, and thereby lower the chances of more people contracting the respiratory disease.

However, a lockdown of the affected area could also be detrimental to public health, putting the domestic population at risk as medical supplies dwindle, and causing much anger and resentment.


Excerpt from:

This tech firm used AI & machine learning to predict Coronavirus outbreak; warned people about danger zones - Economic Times

Written by admin

February 4th, 2020 at 9:52 am

Posted in Machine Learning

New Project at Jefferson Lab Aims to Use Machine Learning to Improve Up-Time of Particle Accelerators – HPCwire

Posted: at 9:52 am



NEWPORT NEWS, Va., Jan. 30, 2020 - More than 1,600 nuclear physicists worldwide depend on the Continuous Electron Beam Accelerator Facility for their research. Located at the Department of Energy's Thomas Jefferson National Accelerator Facility in Newport News, Va., CEBAF is a DOE User Facility that is scheduled to conduct research for limited periods each year, so it must perform at its best during each scheduled run.

But glitches in any one of CEBAF's tens of thousands of components can cause the particle accelerator to temporarily fault and interrupt beam delivery, sometimes by mere seconds but other times by many hours. Now, accelerator scientists are turning to machine learning in hopes that they can more quickly recover CEBAF from faults and one day even prevent them.

Anna Shabalina is a Jefferson Lab staff member and principal investigator on the project, which has been funded by the Laboratory Directed Research & Development program for fiscal year 2020. The program provides the resources for Jefferson Lab personnel to make rapid and significant contributions to critical science and technology problems of mission relevance to the lab and the DOE.

Shabalina says her team is specifically concerned with the types of faults that most often bring CEBAF grinding to a halt: those that concern the superconducting radiofrequency acceleration cavities.

"Machine learning is quickly gaining popularity, particularly for optimizing, automating and speeding up data analysis," Shabalina says. "This is exactly what is needed to reduce the workload for SRF cavity fault classification."

SRF cavities are the backbone of CEBAF. They configure electromagnetic fields to add energy to the electrons as they travel through the CEBAF accelerator. If an SRF cavity faults, the cavity is turned off, disrupting the electron beam and potentially requiring a reconfiguration that limits the energy of the electrons that are being accelerated for experiments.

Shabalina and her team plan to use a recently deployed data acquisition system that records data from individual cavities. The system records 17 parameters from a cavity that faults; it also records the 17 parameters from a cavity if one of its near neighbors faults.

At present, system experts visually inspect each data set by hand to identify the type of fault and which component caused it. The information is a valuable tool that helps CEBAF operators determine how to mitigate the fault.

"Each cavity fault leaves a unique signature in the data," Shabalina says. "Machine learning is particularly well suited for finding patterns, even in noisy data."

The team plans to build on this strength of machine learning to create a model that recognizes the various types of faults. When shown enough input signals and corresponding fault types, the model is expected to be able to identify the fault patterns in CEBAF's complex signals. The next step would then be to run the model during CEBAF operations so that it can classify in real time the different kinds of faults that cause the machine to automatically trip off.
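As a minimal sketch of that supervised setup: each training example below stands in for the 17 recorded parameters of one fault event, labeled with an expert-assigned fault type. The synthetic data, the fault names, and the random-forest choice are illustrative assumptions, not the Jefferson Lab team's actual model.

```python
# Hedged sketch of supervised fault classification from cavity signals.
# Synthetic data; fault labels and model choice are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_events, n_params = 200, 17  # 17 parameters recorded per fault event
X = rng.normal(size=(n_events, n_params))
# Pretend the first two parameters carry the fault signature.
y = np.where(X[:, 0] > 0.5, "quench",
             np.where(X[:, 1] > 0.5, "microphonics", "controls"))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# In operation, each new fault's 17-parameter signature would be classified
# in real time so operators can apply pointed recovery measures.
print("held-out accuracy:", model.score(X_test, y_test))
```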

"We plan to develop machine learning models to identify the type of the fault and the cavity causing instability. This will give operators the ability to apply pointed measures to quickly bring the cavities back online for researchers," Shabalina explains.

If successful, the project would also open the possibility of extending the model to identify precursors to cavity trips, so that operators would have an early warning system of possible faults and could take action to prevent them from ever occurring.

About Jefferson Science Associates, LLC

Jefferson Science Associates, LLC, a joint venture of the Southeastern Universities Research Association, Inc. and PAE, manages and operates the Thomas Jefferson National Accelerator Facility, or Jefferson Lab, for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Source: Thomas Jefferson National Accelerator Facility (Jefferson Lab)

See the original post:

New Project at Jefferson Lab Aims to Use Machine Learning to Improve Up-Time of Particle Accelerators - HPCwire

Written by admin

February 4th, 2020 at 9:52 am

Posted in Machine Learning

Euro machine learning startup plans NYC rental platform, the punch list goes digital & other proptech news – The Real Deal

Posted: at 9:52 am



New York City rentals (Credit: iStock)

Digital marketplace gets a boost

CRE digital marketplace CREXi nabbed $30 million in a Series B round led by Mitsubishi Estate Company, Industry Ventures, and Prudence Holdings. The new funds will help them build out a subscription service aimed at brokers and an analytics service that highlights trends in the industry. The company wants to become the go-to platform for every step in the CRE process, from marketing to sale.

Dude, where's my tech-fueled hotel chain?

Ashton Kutcher's Sound Ventures and travel-focused VC firm Thayer Ventures have gotten behind hospitality startup Life House, leading a $30 million Series B round. The company runs a boutique hotel chain as well as a management platform, which gives hotel owners access to AI-based pricing and automated financial accounting. Life House has over 800 rooms across cities such as Miami and Denver, with plans to expand to 25 hotels by next year.

Working from home

As the deadly coronavirus outbreak becomes more serious by the hour, WeWork said it is temporarily closing 55 locations in China. The struggling co-working company encouraged employees at these sites to work from home or in private rooms to keep from catching the virus. Also this week, the startup closed a three-year deal to provide office space for 250 employees of gym membership company Gympass, per Reuters. WeWork's owner SoftBank is a minority investor in Gympass, so it looks like Masa Son is using some parts of his portfolio to prop up others.

300,000

That's how many listings rental platform/flatmate matcher Badi has across London, Berlin, Madrid, and Barcelona. Barcelona-based Badi claims to use machine-learning technology to match tenants and rooms. Badi plans on hopping across the pond to New York City within the year. It's an interesting market for the company to enter. Though most people use a platform like StreetEasy to find an apartment with a traditional landlord, few established companies have cracked the sublet game without running afoul of New York City's rental laws. In effect, Badi would likely be competing with Facebook groups such as Gypsy Housing plus wanna-be-my-roommate startups like Roomi and SpareRoom. Badi is backed by Goodwater Capital, Target Global, Spark Capital and Mangrove Capital. The firm has raised over $45 million in VC funding since its founding in 2015.

Pink slips at Compass

Uh oh, yet another SoftBank-funded startup is laying off employees. Up to 40 employees of tech brokerage Compass in the IT, marketing and M&A departments will be getting the pink slip this week. Sources told E.B. Solomont that the nationwide cuts are part of a reorganization to introduce a new Agent Experience Team that will take over onboarding and training new agents from former employees. It's a small number of cuts compared to the 18,000 employees Compass has across the U.S., but it isn't a great look in today's business climate.

Getting ready to move

As SoftBank-backed hospitality startup Oyo continues to cut back, its arch nemesis RedDoorz just launched a new co-living program in Indonesia. They're targeting young professionals and college students with the KoolKost service, dishing out shared units with flexible leases and free WiFi. Their main business, like Oyo's, is running a network of budget hotels across Southeast Asia. We'll see if co-living will help them avoid some of Oyo's profitability problems.

Homes on Olympus

It's no secret that it can be a pain to figure out a place to live when work needs you to move to a new city for a bit. You can take your pick between bland corporate housing and Airbnbs designed for quick vacations. That's where Zeus comes in (not with a thunderbolt, but with a corporate housing platform).

Zeus signs two-year minimum leases with landlords, furnishes the apartments with couches meant to look chic, and rents them out to employees for 30 days or more. They currently manage around 2,000 furnished homes with the goal of filling a newly added apartment within 10 days.

Corporate housing is a competitive space, with startups like Domio and Sonder also trying to lure in business travelers. You'd think that Zeus would have to go one-on-one with Airbnb, but the two companies actually have a partnership. The short-term rental giant lists Zeus properties on its platform and invested in the company as part of a $55 million Series B round last month. They're trying to keep the competition close.

Punch lists go digital

Home renovations platform Punch List just scored $4 million in a seed round led by early-stage VC funds Bling Capital and Bedrock Capital, per Crunchbase. The platform lets homeowners track project progress and gives contractors a place to send digital invoices, all on a newly launched app. The company wants to make the frustrating process of remodeling as digital as possible.

See original here:

Euro machine learning startup plans NYC rental platform, the punch list goes digital & other proptech news - The Real Deal

Written by admin

February 4th, 2020 at 9:52 am

Posted in Machine Learning

UB receives $800,000 NSF/Amazon grant to improve AI fairness in foster care – UB Now: News and views for UB faculty and staff – University at Buffalo…

Posted: at 9:52 am



A multidisciplinary UB research team has received an $800,000 grant to develop a machine learning system that could eventually help caseworkers and human services agencies determine the best available services for the more than 20,000 youth who annually age out of foster care without rejoining their families.

The National Science Foundation and Amazon, the grant's joint funders, have partnered on a program called Fairness in Artificial Intelligence (FAI) that aims to address bias and build trustworthy computational systems that can contribute to solving the biggest challenges facing modern societies.

Over the course of three years, the UB researchers will collaborate with the Hillside Family of Agencies in Rochester, one of the oldest family and youth nonprofit human services organizations in the country, and a youth advisory council made up of individuals who have recently aged out of foster care to develop the tool. They will also consult with national experts across specializations to inform this complex work.

Researchers will use data from the Administration for Children and Families' (ACF) federally mandated National Youth in Transition Database (NYTD) and input from collaborators to inform their predictive model. Each state participates in NYTD to report the experiences and services used by youth in foster care.

The team's three-pronged goal is to use the experiences of youth, caseworkers and experts in the foster care system to identify the often hard-to-find biases in data used to train machine learning models, to obtain multiple perspectives on fairness with respect to decisions about services, and to then build a system that can more equitably and efficiently deliver services.

Social scientists have long considered questions of fairness and justice in societies, but beginning in the early part of the 21st century, there was growing awareness of how computers might be using unfair algorithms, according to Kenneth Joseph, assistant professor in the Department of Computer Science and Engineering and one of the co-investigators of the project.

Joseph is an expert in machine learning who focuses much of his research on better understanding how biases work their way into computational models, and how to understand and address the social and technical processes responsible for doing so.

Machine learning is any computer program that can help extract patterns in data. Unsupervised learning identifies patterns, while supervised learning tries to predict something based on those patterns.

"Our supervised problem is to take the information available about a particular child and make a prediction about how to allocate services," says Joseph. "Our goal is to help social workers identify youth who might benefit from preventative services, while doing so in a manner that participants within the system feel is fair and equitable."

"We also want our approach to have applications beyond foster care, so that eventually it can be used in other public service settings."
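To make the supervised/unsupervised distinction above concrete, here is a toy contrast on one synthetic feature matrix: clustering discovers groups without labels, while a classifier learns to predict a label. The features and the "benefits from service" target are invented for illustration.

```python
# Hedged sketch: unsupervised pattern discovery vs. supervised prediction.
# All data is synthetic and illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))            # e.g., anonymized case features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical "benefits from service" label

groups = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # unsupervised: find patterns
model = LogisticRegression().fit(X, y)                   # supervised: predict from them
print("cluster sizes:", np.bincount(groups))
print("prediction for a new case:", model.predict(X[:1])[0])
```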

A machine learning model's greatest asset, however, might also be its greatest liability. Machine learning algorithms learn from no source other than the data they're provided. If the original data is biased, Joseph says, the algorithm will learn and echo those biases.

For instance, models for loan distribution derived from data that gives income- and geography-based preferences to applicants could be using information with inherent race, ethnicity and gender disparities.

"There are many ways algorithms can be unfair, and very few of them have anything to do with math," says Joseph.
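One simple check researchers use for this kind of unfairness is demographic parity: comparing a model's positive-decision rate across groups. The sketch below uses invented decisions and group labels, and this is only one of many fairness metrics in use.

```python
# Hedged sketch of a demographic-parity check on model decisions.
# Decisions and group labels are invented for illustration.
import numpy as np

def positive_rate(decisions: np.ndarray, group: np.ndarray, value: str) -> float:
    # Share of positive decisions (e.g., loan or service approvals) in a group.
    return decisions[group == value].mean()

decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = approved
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rate_a = positive_rate(decisions, group, "a")
rate_b = positive_rate(decisions, group, "b")
print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
```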

Finding and correcting those biases raises questions about using computers to make decisions affecting what is already a vulnerable population.

By age 19, 47% of foster care youth who have not been reunited with their families have not finished high school, 20% have experienced homelessness and 27% of males have been incarcerated, according to the ACF's Children's Bureau.

But Melanie Sage, assistant professor in the School of Social Work and another of the grant's co-principal investigators, says this project is about providing caseworkers with an additional tool to help inform, not replace, their decision-making.

"We never want algorithms to replace the decisions made by trained professionals, but we do need information about how to make decisions based on likely outcomes and what the data tell us about pathways for children in foster care," she says.

Sage says their work on this grant is critical given the generational impact caseworkers and agencies have on the lives of foster youth.

"When a determination is made that services should be provided for protection because kids are not better off with their families, those kids are deserving of the best services and interventions that the child welfare system can offer," she says. "This research ideally gives us another tool that helps make that happen."

The project's other co-investigators are Varun Chandola, assistant professor of computer science and engineering; Huei-Yen Chen, assistant professor of industrial and systems engineering; and Atri Rudra, associate professor of computer science and engineering.

See the article here:

UB receives $800,000 NSF/Amazon grant to improve AI fairness in foster care - UB Now: News and views for UB faculty and staff - University at Buffalo...

Written by admin

February 4th, 2020 at 9:52 am

Posted in Machine Learning

The Human-Powered Companies That Make AI Work – Forbes

Posted: at 9:52 am



Machine learning models require human labor for data labeling

The hidden secret of artificial intelligence is that much of it is actually powered by humans. Well, to be specific, the supervised learning algorithms that have gained much of the attention recently are dependent on humans to provide well-labeled training data that can be used to train machine learning algorithms. Since machines can't (yet) teach themselves, they have to first be taught, and it falls upon humans to do this training. This is the secret Achilles' heel of AI: the need for humans to teach machines the things that they are not yet able to do on their own.

Machine learning is what powers today's AI systems. Organizations are implementing one or more of the seven patterns of AI, including computer vision, natural language processing, predictive analytics, autonomous systems, pattern and anomaly detection, goal-driven systems, and hyperpersonalization, across a wide range of applications. However, in order for these systems to be able to create accurate generalizations, these machine learning systems must be trained on data. The more advanced forms of machine learning, especially deep learning neural networks, require significant volumes of data to be able to create models with desired levels of accuracy. It goes without saying, then, that the machine learning data needs to be clean, accurate, complete, and well-labeled so the resulting machine learning models are accurate. Whereas it has always been the case that garbage in is garbage out in computing, it is especially the case with regard to machine learning data.

According to analyst firm Cognilytica, over 80% of AI project time is spent preparing and labeling data for use in machine learning projects:

Percentage of time allocated to machine learning tasks (Source: Cognilytica)

(Disclosure: I'm a principal analyst at Cognilytica)

Fully one quarter of this time is spent providing the necessary labels on data so that supervised machine learning approaches will actually achieve their learning objectives. Customers have the data, but they don't have the resources to label large data sets, nor do they have a mechanism to ensure accuracy and quality. Raw labor is easy to come by, but it's much harder to guarantee any level of quality from a random, mostly transient labor force. Third-party managed labeling solution providers address this gap by providing the labor force to do the labeling, combined with expertise in large-scale data labeling efforts and an infrastructure for managing labeling workloads and achieving desired quality levels.

According to a recent report from research firm Cognilytica, over 35 companies are currently engaged in providing human labor to add labels and annotation to data to power supervised learning algorithms. Some of these firms use general, crowdsourced approaches to data labeling, while others bring their own, managed and trained labor pools that can address a wide range of general and domain-specific data labeling needs.

As detailed in the Cognilytica report, the tasks for data labeling and annotation depend highly on the sort of data to be labeled for machine learning purposes and the specific learning task that is needed. The primary use cases for data labeling fall into the following major categories:

These labeling tasks are getting increasingly more complicated and domain-specific as machine learning models are developed that can handle more general use cases. For example, innovative medical technology companies are building machine learning models that can identify all manner of concerns within medical images, such as clots, fractures, tumors, obstructions, and other concerns. To build these models requires first training machine learning algorithms to identify those issues within images. To train the machine learning models requires lots of data that has been labeled with the specific areas of concern identified. To accomplish that labeling task requires some level of knowledge as to how to identify a particular issue and the knowledge of how to appropriately label it. This is not a task for the random, off-the-street individual. This requires some amount of domain expertise.
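For a sense of what such domain-expert labeling produces, here is one plausible shape for a single labeled medical-image record. Every field name here is a hypothetical example, not any provider's actual schema.

```python
# Hypothetical labeled-image record from a managed medical-labeling workflow.
# Field names are invented for illustration.
annotation = {
    "image_id": "scan-00042",
    "labeler": {"id": "rad-17", "credential": "radiologist"},  # domain expert
    "regions": [
        {
            "label": "fracture",
            "bounding_box": {"x": 112, "y": 240, "width": 64, "height": 48},
            "confidence": 0.9,
        }
    ],
    "review": {"status": "approved", "reviewer": "rad-03"},  # quality-control pass
}
print(annotation["regions"][0]["label"])
```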

Consequently, labeling firms have evolved to provide more domain-specific capabilities and expanded the footprint of their offerings. As machine learning starts to be applied to ever more specific areas, the needs for this sort of domain-specific data labeling will only increase. According to the Cognilytica report, the demand for data labeling services from third parties will grow from $1.7 Billion (USD) in 2019 to over $4.1B by 2024. This is a significant market, much larger than most might be aware of.

Increasingly, machines are doing this work of data labeling as well. Data labeling providers are applying machine learning to their own labeling efforts to perform some of the work of labeling, perform quality control checks on human labor, and optimize the labeling process. These firms use machine learning inferencing to identify data types, flag things that don't match the structure of a data column, surface potential data quality or formatting issues, and provide recommendations to users for how they could clean the data. In this way, machine learning is helping the process of improving machine learning. AI applied to AI. Quite interesting.
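One way such machine-checks-human quality control can work, sketched below on invented data: train a model on the existing labels, then flag items where it confidently disagrees with the human label so a reviewer can re-check them. This is an illustrative approach, not any particular vendor's method.

```python
# Hedged sketch of ML-assisted label quality control on synthetic data:
# flag confident disagreements between a model and human labels for review.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
human_labels = (X[:, 0] > 0).astype(int)
human_labels[:5] = 1 - human_labels[:5]  # simulate a few labeling mistakes

model = LogisticRegression().fit(X, human_labels)
proba = model.predict_proba(X)[:, 1]

# Confident disagreement: model is >90% sure of the opposite of the human label.
suspect = np.where(((proba > 0.9) & (human_labels == 0)) |
                   ((proba < 0.1) & (human_labels == 1)))[0]
print("items to re-review:", suspect)
```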

For the foreseeable future, the need for human-based data labeling for machine learning will not diminish. If anything, the use of machine learning continues to grow into new domains that require new knowledge to be built and learned by systems. That growth requires well-labeled data for those new domains and, in turn, the services of the hidden army of human laborers making AI work as well as it does today.

View original post here:

The Human-Powered Companies That Make AI Work - Forbes

Written by admin

February 4th, 2020 at 9:52 am

Posted in Machine Learning




