
Archive for the ‘Machine Learning’ Category

8 Trending skills you need to be a good Python Developer – iLounge

Posted: September 20, 2020 at 10:55 pm


without comments

Python, the general-purpose programming language, has gained much popularity over the years. Whether it's web development, app design, scientific computing or machine learning, Python has it all. Because of this favourability of Python in the market, Python developers are also in high demand. They are required to be competent, out-of-the-box thinkers, and standing out is undoubtedly a race to win.

Are you one of those Python developers? Do you find yourself lagging behind in proving your reliability? Maybe you are going wrong with some of your skills. Never mind!

I'm here to tell you about the 8 trendsetting skills you need to hone. Implement them and prove your expertise in the programming world. Come, let's take a look!

Being able to use the Python libraries to their full potential also determines your expertise with this programming language. Python libraries like Pandas, Matplotlib, Requests, Pyglet and more consist of reusable code that you'd wish to add to your programs. These libraries are a boon to you as a developer. They speed up your workflow and make task execution far easier. Nothing saves more time than not having to write the whole code from scratch every time.
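To make the point concrete, here is a minimal sketch of the kind of boilerplate a library such as Pandas removes. The sales.csv file and its month and revenue columns are hypothetical, invented purely for illustration.

import pandas as pd

# One call parses the CSV; no hand-written file handling or type conversion.
df = pd.read_csv("sales.csv")  # hypothetical file

# One line aggregates revenue by month; no manual loops or accumulators.
monthly = df.groupby("month")["revenue"].sum()
print(monthly.sort_values(ascending=False).head())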

You might know how Python cuts down on repeated code by using pre-developed frameworks. As a developer using a Python framework, you typically write code that conforms to a set of conventions, which makes it easy to delegate the communications, infrastructure and other low-level responsibilities to the framework. You can therefore concentrate on the application logic in your own code. A good knack for these Python frameworks is a blessing, as it allows development to flow smoothly. You may not know them all, but it's advisable to keep up with some popular ones like Flask, Django and CherryPy.
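As a rough illustration of what a framework handles for you, the few lines below stand up a tiny Flask endpoint: Flask does the routing and HTTP plumbing while you write only the application logic. Treat it as a minimal sketch, not a production configuration.

from flask import Flask

app = Flask(__name__)

@app.route("/health")
def health():
    # Flask parses the request and routes it here; we only return the response.
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(port=5000)  # built-in development server, for local testing only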

Not sure of Python frameworks? You can seek help from Python Training Courses.

Object-relational mapping (ORM) is a programming technique for accessing a database. It exposes your database as a series of objects, without your having to write commands to insert or retrieve data. It may sound complex, but it can save you a lot of time and help you control access to your database. ORM tools can also be customised by a Python developer.
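For example, with SQLAlchemy, a widely used Python ORM, a table can be declared as a class and queried as objects while the generated SQL stays behind the scenes. The Customer model and the in-memory SQLite database below are assumptions made purely for illustration.

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)  # creates the table from the class definition

with Session(engine) as session:
    session.add(Customer(name="Ada"))  # the INSERT statement is generated for us
    session.commit()
    found = session.query(Customer).filter_by(name="Ada").first()  # SELECT without SQL
    print(found.id, found.name)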

Front-end technologies like HTML5, CSS3 and JavaScript will help you collaborate and work with a team of designers, marketers and other developers. Again, this can save a lot of development time.

A good Python developer should have sharp analytical skills. You are expected to observe and critically come up with complex ideas, solutions or decisions about code. Speaking of the analytical skills you need in Python:

Analytical skills are a mark of your additional knowledge in the field. Building your analytical skills also makes you a better problem solver.

Python developers have a bright future in Data Science. Companies racing to innovate will prefer developers with Data Science knowledge to create innovative tech solutions. Working in Python will also build your knowledge of probability, statistics, data wrangling and SQL, all of which are significant aspects of Data Science.

Python is the right choice for growing in the Artificial Intelligence and Machine Learning domain. It is an intuitive and minimalistic language with a full-featured line of libraries (also called frameworks) that considerably reduces the time required to get your first results.

However, to master artificial intelligence and machine learning with Python you need a strong command of Python syntax. A fair grounding in calculus, data science and statistics can make you a pro. If you are a beginner, you can gain expertise in these areas by brushing up your maths skills for Python's mathematical libraries. Gradually, you can acquire adequate machine learning skills by building simple neural networks.
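As one possible starting point, scikit-learn's MLPClassifier trains a small neural network on a bundled toy dataset in a few lines; the library choice, layer size and other parameters here are illustrative rather than prescriptive.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small images of handwritten digits, included with scikit-learn.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple multilayer perceptron with one hidden layer of 32 units.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))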

In the coming years, deep learning professionals will be well positioned, as huge opportunities await in this field. With Python, you should be able to easily develop and evaluate deep learning models. Since deep learning is the advanced form of machine learning, to bring it into full use you should first get hands-on with:

A good Python developer also brings a mixture of soft skills such as proactivity, communication and time management. Above all, a career as a Python developer is challenging, but at the same time interesting. Empowering yourself with these skill sets is sure to take you a long way. Push yourself out of the comfort zone and work hard from today!

Read more:

8 Trending skills you need to be a good Python Developer - iLounge

Written by admin

September 20th, 2020 at 10:55 pm

Posted in Machine Learning

Automation Continuum – Leveraging AI and ML to Optimise RPA – Analytics Insight

Posted: at 10:55 pm


without comments

Over the past year, the adoption of robotic process automation, especially advanced macros or robotic workers intended to automate the most ordinary, dull and time-consuming tasks, has seen significant growth. As the technology develops alongside artificial intelligence and machine learning, the most encouraging future for knowledge workers is one where the ease of deployment of RPA and the raw power of machine learning combine to create more productive, more intelligent robotic workers.

One of the keys to adoption is that companies would prefer not to burden people with a lot of new tools and instead let their existing environments learn. In each situation, whatever UI people are working in, the company might add a widget, or perhaps a panel to an existing dashboard, containing the data that is required. Adding to the current UI, or adding a layer on top of it that routes cases to the right person, means workers never see the 80% of cases that were automatically delegated and never reached them.

Although RPA today is reaching into pretty much every industry, the major adopters of this tech are banks, insurance agencies, telecom firms and service organizations. This is because organizations in these sectors mostly have legacy systems, and RPA solutions integrate readily with their existing functionality.

Artificial intelligence is essentially about a computer's capacity to imitate human thinking, whether that involves recognising a picture, solving a problem or holding a conversation.

Consider Facebook's AI Research to see this better. Here, the social media giant feeds the AI system with various pictures and the machine delivers accurate results. When a photograph of a dog is shown to the machine, it not only recognises it as a dog but also identifies the breed.

RPA is a technology that uses a particular set of rules and an algorithm, and on that basis automates a task. While AI is centred more on doing a human-level task, RPA is essentially software that reduces human effort; it is about saving time for the business and its white-collar workers. Some of the most common examples of RPA are moving data from one system to another, payroll processing and forms processing.

Although AI is a stride ahead of RPA, the two technologies have the ability to take things to the next level when combined. For instance, assume you need your documents to be in a particular format to get them checked, and RPA carries out this responsibility. If you use an AI system to filter out the poorly formatted or unacceptable documents, the work of the RPA becomes much simpler. This joint effort is called the Automation Continuum.

The development of GPT-3, the Generative Pre-trained Transformer 3, is a remarkable innovation that uses AI to harness the immense amount of language data on the internet. By training an extraordinarily large neural network, GPT-3 can comprehend and produce both human and programming languages with near-human performance. For example, given a few pairs of legal agreements and plain-English documents, it can begin to automate the task of writing legal contracts in plain English. This sort of sophisticated automation was unimaginable with classic RPA tools that did not use data and state-of-the-art AI.

Many activities, while repetitive, require understanding and thought by a human with knowledge and experience. And this is where the next generation of RPA tools can use AI. Humans are very good at answering the question "What else is significant or interesting?". Artificial intelligence will help RPA tools go further than simply adding more variables to a query. Artificial intelligence will allow RPA to take the next step and answer the question "What else?". Essentially, applying AI to RPA will allow these tools to expand the scope of what they can do.

Even giant organizations like IBM, Microsoft and SAP are tapping more and more into RPA, expanding the awareness and foothold of RPA software. Moreover, new vendors are emerging at a fast pace and have begun to stamp their presence on the industry.

However, it isn't just RPA that is the talk of the town; the role of AI is also one of the most significant topics at present. The concept of the Automation Continuum is becoming popular among a lot of companies. The industry is now seeing their combined capabilities: AI can read, listen and analyse, and then feed data into bots that can create output, package it and send it off. Ultimately, RPA and AI are two significant technologies that companies can use to support their digital transformation.

With organizations going through digital transformation, and perhaps accelerating their efforts to deal with the effects of Covid-19 on their workforces, data is becoming increasingly significant. The optimisation of RPA will benefit greatly from increased digitisation in organizations. As organizations create data lakes and other new data stores that are accessible through APIs, it is critical to give RPA tools access so they can be optimised.

While RPA has delivered noteworthy benefits when it comes to automation, the next generation of RPA will deliver more by using AI and machine learning for optimisation. This isn't about faster automation, however, but about better automation.

Read the original post:

Automation Continuum - Leveraging AI and ML to Optimise RPA - Analytics Insight

Written by admin

September 20th, 2020 at 10:55 pm

Posted in Machine Learning

UT Austin Selected as Home of National AI Institute Focused on Machine Learning – UT News | The University of Texas at Austin

Posted: August 27, 2020 at 3:50 am


without comments

AUSTIN, Texas - The National Science Foundation has selected The University of Texas at Austin to lead the NSF AI Institute for Foundations of Machine Learning, bolstering the university's existing strengths in this emerging field. Machine learning is the technology that drives AI systems, enabling them to acquire knowledge and make predictions in complex environments. This technology has the potential to transform everything from transportation to entertainment to health care.

UT Austin, already among the world's top universities for artificial intelligence, is poised to develop entirely new classes of algorithms that will lead to more sophisticated and beneficial AI technologies. The university will lead a larger team of researchers that includes the University of Washington, Wichita State University and Microsoft Research.

"This is another important step in our university's ascension as a world leader in machine learning and tech innovation as a whole, and I am grateful to the National Science Foundation for their profound support," said UT Austin interim President Jay Hartzell. "Many of the world's greatest problems and challenges can be solved with the assistance of artificial intelligence, and it's only fitting, given UT's history of accomplishment in this area along with the booming tech sector in Austin, that this new NSF institute be housed right here on the Forty Acres."

UT Austin is simultaneously establishing a permanent base for campuswide machine learning research called the Machine Learning Laboratory. It will house the new AI institute and bring together computer and data scientists, mathematicians, roboticists, engineers and ethicists to meet the institute's research goals while also working collaboratively on other interdisciplinary projects. Computer science professor Adam Klivans, who led the effort to win the NSF AI institute competition, will direct both the new institute and the Machine Learning Lab. Alex Dimakis, associate professor of electrical and computer engineering, will serve as the AI institute's co-director.

"Machine learning can be used to predict which of thousands of recently formulated drugs might be most effective as a COVID-19 therapeutic, bypassing exhaustive laboratory trial and error," Klivans said. "Modern datasets, however, are often diffuse or noisy and tend to confound current techniques. Our AI institute will dig deep into the foundations of machine learning so that new AI systems will be robust to these challenges."

Additionally, many advanced AI applications are limited by computational constraints. For example, algorithms designed to help machines recognize, categorize and label images can't keep up with the massive amount of video data that people upload to the internet every day, and advances in this field could have implications across multiple industries.

Dimakis notes that algorithms will be designed to train video models efficiently. For example, Facebook, one of the AI institute's industry partners, is interested in using these algorithms to make its platform more accessible to people with visual impairments. And in a partnership with Dell Medical School, AI institute researchers will test these algorithms to expedite turnaround time for medical imaging diagnostics, possibly reducing the time it takes for patients to get critical assessments and treatment.

The NSF is investing more than $100 million in five new AI institutes nationwide, including the $20 million project based at UT Austin to advance the foundations of machine learning.

In addition to Facebook, Netflix, YouTube, Dell Technologies and the city of Austin have signed on to transfer this research into practice.

The institute will also pursue the creation of an online master's degree in AI, along with undergraduate research programming and online AI courses for high schoolers and working professionals.

Austin-based tech entrepreneurs Zaib and Amir Husain, both UT Austin alumni, are supporting the new Machine Learning Laboratory with a generous donation to sustain its long-term mission.

"The university's strengths in computer science, engineering, public policy, business and law can help drive applications of AI," Amir Husain said. "And Austin's booming tech scene is destined to be a major driver for the local and national economy for decades to come."

The Machine Learning Laboratory is based in the Department of Computer Science and is a collaboration among faculty, researchers and students from across the university, including Texas Computing; Texas Robotics; the Department of Statistics and Data Sciences; the Department of Mathematics; the Department of Electrical and Computer Engineering; the Department of Information, Risk & Operations Management; the School of Information; the Good Systems AI ethics grand challenge team; the Oden Institute for Computational Engineering and Sciences; and the Texas Advanced Computing Center (TACC).

See the article here:

UT Austin Selected as Home of National AI Institute Focused on Machine Learning - UT News | The University of Texas at Austin

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Participation-washing could be the next dangerous fad in machine learning – MIT Technology Review

Posted: at 3:50 am


without comments

More promising is the idea of participation as justice. Here, all members of the design process work together in tightly coupled relationships with frequent communication. Participation as justice is a long-term commitment that focuses on designing products guided by people from diverse backgrounds and communities, including the disability community, which has long played a leading role here. This concept has social and political importance, but capitalist market structures make it almost impossible to implement well.

Machine learning extends the tech industry's broader priorities, which center on scale and extraction. That means participatory machine learning is, for now, an oxymoron. By default, most machine-learning systems have the ability to surveil, oppress, and coerce (including in the workplace). These systems also have ways to manufacture consent: for example, by requiring users to opt in to surveillance systems in order to use certain technologies, or by implementing default settings that discourage them from exercising their right to privacy.

Given that, it's no surprise that machine learning fails to account for existing power dynamics and takes an extractive approach to collaboration. If we're not careful, participatory machine learning could follow the path of AI ethics and become just another fad that's used to legitimize injustice.

How can we avoid these dangers? There is no simple answer. But here are four suggestions:

Recognize participation as work. Many people already use machine-learning systems as they go about their day. Much of this labor maintains and improves these systems and is therefore valuable to the systems' owners. To acknowledge that, all users should be asked for consent and provided with ways to opt out of any system. If they choose to participate, they should be offered compensation. Doing this could mean clarifying when and how data generated by a user's behavior will be used for training purposes (for example, via a banner in Google Maps or an opt-in notification). It would also mean providing appropriate support for content moderators, fairly compensating ghost workers, and developing monetary or nonmonetary reward systems to compensate users for their data and labor.

Make participation context specific. Rather than trying to use a one-size-fits-all approach, technologists must be aware of the specific contexts in which they operate. For example, when designing a system to predict youth and gang violence, technologists should continuously reevaluate the ways in which they build on lived experience and domain expertise, and collaborate with the people they design for. This is particularly important as the context of a project changes over time. Documenting even small shifts in process and context can form a knowledge base for long-term, effective participation. For example, should only doctors be consulted in the design of a machine-learning system for clinical care, or should nurses and patients be included too? Making it clear why and how certain communities were involved makes such decisions and relationships transparent, accountable, and actionable.

Plan for long-term participation from the start. People are more likely to stay engaged in processes over time if they're able to share and gain knowledge, as opposed to having it extracted from them. This can be difficult to achieve in machine learning, particularly for proprietary design cases. Here, it's worth acknowledging the tensions that complicate long-term participation in machine learning, and recognizing that cooperation and justice do not scale in frictionless ways. These values require constant maintenance and must be articulated over and over again in new contexts.

Learn from past mistakes. More harm can be done by replicating the ways of thinking that originally produced harmful technology. We as researchers need to enhance our capacity for lateral thinking across applications and professions. To facilitate that, the machine-learning and design community could develop a searchable database to highlight failures of design participation (such as Sidewalk Labs' waterfront project in Toronto). These failures could be cross-referenced with socio-structural concepts (such as issues pertaining to racial inequality). This database should cover design projects in all sectors and domains, not just those in machine learning, and explicitly acknowledge absences and outliers. These edge cases are often the ones we can learn the most from.

It's exciting to see the machine-learning community embrace questions of justice and equity. But the answers shouldn't bank on participation alone. The desire for a silver bullet has plagued the tech community for too long. It's time to embrace the complexity that comes with challenging the extractive capitalist logic of machine learning.

Mona Sloane is a sociologist based at New York University. She works on design inequality in the context of AI design and policy.

Here is the original post:

Participation-washing could be the next dangerous fad in machine learning - MIT Technology Review

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Getting to the heart of machine learning and complex humans – The Irish Times

Posted: at 3:50 am


without comments

Abeba Birhane: I study embodied cognitive science, which is at the heart of how people interact and go about their daily lives and what it means to be a person.

You recently made a big discovery that an academic library containing millions of images used to train artificial intelligence systems had privacy and ethics issues, and that it included racist, misogynistic and other offensive content.

Yes, I worked on this with Vinay Prabhu, a chief scientist at UnifyID, a privacy start-up in Silicon Valley, on the 80-million-image dataset curated by the Massachusetts Institute of Technology. We spent months looking through this dataset, and we found thousands of images labelled with insults and derogatory terms.

Using this kind of content to build and train artificial intelligence systems, including face recognition systems, would embed harmful stereotypes and prejudices and could have grave consequences for individuals in the real world.

What happened when you published the findings?

The media picked up on it, so it got a lot of publicity. MIT withdrew the database and urged people to delete their copies of the data. That was humbling and a nice result.

How does this finding fit in to your PhD research?

I study embodied cognitive science, which is at the heart of how people interact and go about their daily lives and what it means to be a person. The background assumption is that people are ambiguous, they come to be who they are through interactions with other people.

It is a different perspective to traditional cognitive science, which is all about the brain and rationality. My research looks at how artificial intelligence and machine learning has limits in how it can understand and predict the complex messiness of human behaviour and social outcomes.

Can you give me an example?

If you take the Shazam app, it works very well to recognise a piece of music that you play to it. It searches for the pattern of the music in a database, and this narrow search suits the machine approach. But predicting a social outcome from human characteristics is very different.

As humans we have infinite potential; we can react to situations in different ways, and a machine that uses a limited set of parameters cannot predict whether someone is a good hire or at risk of committing a crime in the future. Humans and our interactions represent more than just a few parameters. My research looks at existing machine learning systems and the ethics of this dilemma.

How did you get into this work?

I started in physics back home in Ethiopia, but when I came to Ireland there was so much paperwork and so many exams to translate my Ethiopian qualification that I decided to start from scratch.

So I studied psychology and philosophy, and I did a masters. The masters course had lots of elements: neuroscience, philosophy, anthropology and computer science, where we built computational models of various cognitive faculties, and it is where I really found my place.

How has Covid-19 affected your research?

At the start of the pandemic, I thought this might be a chance to write up a lot of my project, but I found it hard to work at home and to unhook my mind from what was going on around the world.

I also missed the social side, going for coffee and talking with my colleagues about work and everything else. So I am glad to be back in the lab now and seeing my lab mates even at a distance.

Original post:

Getting to the heart of machine learning and complex humans - The Irish Times

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Air Force Taps Machine Learning to Speed Up Flight Certifications – Nextgov

Posted: at 3:50 am


without comments

Machine learning is transforming the way an Air Force office analyzes and certifies new flight configurations.

The Air Force SEEK EAGLE Office sets standards for safe flight configurations by testing and looking at historical data to see how different stores (like a weapon system attached to an F-16) affect flight. A project AFSEO developed along with industry partners can now automate up to 80% of requests for analysis, according to the office's Chief Data Officer Donna Cotton.

"The application is kind of like an eager junior engineer consulting a senior engineer," Cotton said. "It makes the straightforward calls without any input, but in the hard cases it walks into the senior engineer's office and says: 'Hey, I did a bunch of research and this is what I found out. Can you give me your opinion?'"

Cotton spoke at a Tuesday webinar hosted by Tamr, one of the industry partners involved in the project. Tamr announced July 30 that AFSEO awarded the company a $60 million contract for its machine learning application. Two other companies, Dell and Cloudera, helped AFSEO take decades of historical data from simulations, performance studies and the like that were siloed across various specialities, and organize them into a searchable data lake.

On top of this new data architecture, the machine learning application provided by Tamr searches through all the historical data to find past records that can help answer new safety recommendation requests automatically.

This tool is critical because the vast majority of AFSEO's flight certification recommendations are made by analogy, meaning they use previous data rather than new flight tests. But in the past, data was disorganized and lacked unification. This made tracking down these helpful records a challenge for engineers.

Now, a cleaner AFSEO data lake cuts the amount of time engineers waste looking for the information they need. Machine learning further speeds up the process by generating safety reports automatically while still keeping professional engineers in the loop. Even when engineers need to produce original research, the machine learning application can smooth the process by collecting related records to serve as a jumping-off point.

The new process helps AFSEO avoid doing costly flight tests while also increasing confidence that the team is making the safety certification correctly with all the information available to them, Cotton said.

"We are able to be more productive," Cotton said. "It's saving us a lot of money because for us, it's not about profit, but it's about hours. It's about how much effort we are going to have to use to solve or to answer a new request."

See the rest here:

Air Force Taps Machine Learning to Speed Up Flight Certifications - Nextgov

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

The Role of Artificial Intelligence and Machine Learning in the… – Insurance CIO Outlook

Posted: at 3:50 am


without comments


FREMONT, CA: Technology has become the dominant force across all businesses in the last few years. Disruptive technologies like Artificial Intelligence (AI), machine learning, and natural language processing are improving rapidly, evolving from theoretical to practical applications. These technologies have also made an impact on insurance agents and brokers. Many people continue to view technology as their foe. They either believe that machines will eventually replace them, or that a machine can never do their job better than them. While this may not be true, some aspects of it are relatable. For instance, a machine will never be able to provide real-time advice as a live agent does. However, low-cost and easy-to-use platforms are currently available that allow agents and brokers to take advantage of this technology to enhance their delivery of advice and expertise to prospects and clients.

Employee Augmentation

Machine learning has proven to be useful for insurance agents and brokers in various ways. These include capturing knowledge, skills and expertise from a generation of insurance staff before they retire in the next 5 to 10 years, and using it to train new employees.

Personalized Digital Answers

It helps provide personalized answers to a wide range of insurance questions. Digital customers want answers to their questions at any time, not just when an agent's office is open.

Digital Account Review

It helps create and deliver a digital annual account review for personal lines or small commercial insurance accounts. A robust analysis leads to client satisfaction, creates cross-selling opportunities, and reduces errors and omissions problems for the agency.

Many believe that artificial intelligence and machine learning will spell the end of insurance agents as a trusted source of adequate protection against financial losses. However, these technologies are a threat only to insurance agents who are simply order takers. Insurance agents and brokers who embrace the technologies will always find opportunities to grow.

These emerging technologies mustn't be seen as a bane but as a boon. Insurance agents and brokers need to work in tandem with the upgrades in technology and leverage them to the best effect. The technology holds increased potential to enhance customer satisfaction and offer a higher quality of service.

See Also: Top Machine Learning Companies

View post:

The Role of Artificial Intelligence and Machine Learning in the... - Insurance CIO Outlook

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

AI and Machine Learning Network Fetch.ai Partners Open-Source Blockchain Protocol Waves to Conduct R&D on DLT – Crowdfund Insider

Posted: at 3:50 am


without comments

The decentralized finance (DeFi) space is growing rapidly. Oracle protocols like Chainlink, BAND and Gravity have experienced a significant increase in adoption in a cryptocurrency market that's still highly speculative and plagued by market manipulation and wash trading.

Fetch.ai, an open-access machine learning network established by former DeepMind investors and software engineers, has teamed up with Waves, an established, open-source blockchain protocol that provides developer tools for Web 3.0 applications.

As mentioned in an update shared with Crowdfund Insider:

[Fetch.ai and Waves will] conduct joint R&D for the purpose of bringing increased multi-chain capabilities to Fetch.ai's system of autonomous economic agents (AEA). [They will also] push further into bringing DeFi cross-chain by connecting with Waves' blockchain-agnostic and interoperable decentralized cross-chain and oracle network, Gravity.

As explained in the announcement, the integration with Gravity will enable Fetch.ai's Autonomous Economic Agents to gain access to data sources or feeds for several different market pairs, commodities, indices, and futures.

Fetch.ai and Waves aim to achieve closer integration with Gravity in order to provide seamless interoperability to Fetch.ai, making its blockchain-based AI and machine learning (ML) solutions accessible across various distributed ledger technology (DLT) networks.

As stated in the update, the integration will help open up new ways for all Gravity-connected communities to use Fetch.ai's ML functionality within the comfort of their respective ecosystems.

As noted in another update shared with CI, a PwC report predicts that AI and related ML technologies may contribute more than $15 trillion to the world economy from 2017 through 2030. Gartner reveals that during 2019, 37% of organizations had adopted some type of AI into their business operations.

In other DeFi news, Chainlink competitor Band Protocol is securing oracle integration with Nervos, which is a leading Chinese blockchain project.

As confirmed in a release:

Nervos is a Chinese public blockchain that's tooling up for a big DeFi push. The project is building DeFi platforms with China Merchants Bank International and Huobi, and also became one of the first public blockchains to integrate with China's BSN. Amid the DeFi surge, Nervos is integrating Band's oracles to give developers access to real-world data like crypto price feeds.


See the original post:

AI and Machine Learning Network Fetch.ai Partners Open-Source Blockchain Protocol Waves to Conduct R&D on DLT - Crowdfund Insider

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

AI may not predict the next pandemic, but big data and machine learning can fight this one – ZDNet

Posted: at 3:50 am


without comments

In April, at the height of the lockdown, computer-science professor Àlex Arenas predicted that a second wave of coronavirus was highly possible this summer in Spain.

At the time, many scientists were still confident that high temperature and humidity would slow the impact and spread of the virus over the summer months, as happens with seasonal flu.

Unfortunately, Arenas' predictions have turned out to be accurate. Madrid, the Basque country, Aragon, Catalonia, and other Spanish regions are currently dealing with a surge in COVID-19 cases, despite the use of masks, hand-washing and social distancing.

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)

Admittedly, August is not as bad as March for Spain, but it's still not a situation many foresaw.

Arenas' predictions were based on mathematical modeling and underline the important role technology can play in the timing of decisions about the virus and understanding its spread.

"The virus does as we do," says Arenas. So analyzing epidemiological, environmental and mobility data becomes crucial to taking the right actions to contain the spread of the virus.

To help deal with the pandemic, the Catalan government has created a public-private health observatory. It brings together the efforts of the administration, the Hospital Germans Trias i Pujol and several research centers, such as the Center of Innovation for Data Tech and Artificial Intelligence (CIDAI), the Technology Center Eurecat, the Barcelona Supercomputing Center (BSC), the University Rovira i Virgili and the University of Girona, as well as the Mobile World Capital Barcelona.

The Mobile World Capital Barcelona brings to bear the GSMA AI for Impact initiative, which is guided by a taskforce of 20 mobile operators and an advisory panel from 12 UN agencies and partners.

Beyond the institutions, there is a real desire to join forces to respond to the virus using technology. Dani Marco, director general of innovation and the digital economy in the government of Catalonia, makes it clear that "having comparative data on the flu and SARS-CoV-2, mobility, meteorology and population census does help us react quicker and more efficiently against the pandemic".

Data comes from public databases and also from mobile operators, which provide mobility records. It is all anonymized to avoid privacy concerns.

However, the diversity of the sources of the data is a problem. Miguel Ponce de León, a postdoctoral researcher at BSC, the center hosting the project's database, says the data coming from the regions is heterogeneous because it is based on various standards.

So one of the main tasks at BSC is cleaning data to make it usable in predicting trends and building dashboards with useful information. The goal is having lots of models running on BSC's supercomputers to answer a range of questions; how public mobility is promoting the spread of the virus is just one of them.

Arenas argues that having mobility data is crucial as "it tells you the time you have before the infection spreads from one place to another".

"Air-traffic data could have told us when the pandemic would arrive to Spain from China. But nobody was ready."

Being prepared is now more important than ever. In this regard, the Catalan government's Marco stresses that any epidemiologist will be able to use the tools developed at the observatory. He is convinced that digital tools can help, even though they're not the only solution.

According to Professor Arenas: "We need models on how epidemics evolve, and data is crucial in adjusting these models. But making predictions on the next pandemic is highly difficult, even with AI."

He advocates rapid testing methods, even if some scientists challenge their accuracy, as they could provide a useful alternative to PCR (polymerase chain reaction) tests, which also have limitations. He also recommends the use of a contact-tracing app like the Spanish Radar COVID, based on the DP3T decentralized protocol.

"A person can trace up to three contacts over the phone. The app enables you to increase that number to six to eight contacts," he says.

SEE: Coronavirus: Business and technology in a pandemic

Oriol Mitjà, researcher and consultant physician in infectious diseases at the Hospital Germans Trias i Pujol, agrees that Bluetooth technology can be helpful. But of course, "We should still fight against the idea that it's an app to control the population, because it's not," says Arenas.

Other countries, like Germany, Ireland and Switzerland, have taken the view that if there is any chance of an app making even a small contribution to the battle against the virus, it is worth a go.

Marc Torrent, director of the CIDAI, argues that being able to combine reliable data and epidemiological expertise to improve the management of public resources is already a victory.

The Catalan government has created a public-private health observatory to bring together the efforts and data from a number of bodies fighting COVID.

See the rest here:

AI may not predict the next pandemic, but big data and machine learning can fight this one - ZDNet

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Machine Learning Artificial intelligence Market Size and Growth By Leading Vendors, By Types and Application, By End Users and Forecast to 2020-2027 -…

Posted: at 3:50 am


without comments

Qualcomm

The report also inspects the financial standing of the leading companies, which includes gross profit, revenue generation, sales volume, sales revenue, manufacturing cost, individual growth rate, and other financial ratios.

Research Objective:

Our panel of trade analysts has taken immense effort in this exercise in order to produce relevant and reliable primary and secondary data regarding the Machine Learning Artificial intelligence market. The report also delivers inputs from trade consultants that will help the key players save time on internal analysis. Readers of this report will benefit from the inferences it delivers. The report gives an in-depth and extensive analysis of the Machine Learning Artificial intelligence market.

The Machine Learning Artificial intelligence Market is Segmented:

In market segmentation by types of Machine Learning Artificial intelligence, the report covers-

This Machine Learning Artificial intelligence report covers vital elements such as market trends, share, size, and aspects that facilitate the growth of the companies operating in the market, helping readers implement profitable strategies to boost the growth of their business. This report also analyses the expansion, market size, key segments, market share, application, key drivers, and restraints.

Machine Learning Artificial intelligence Market Regional Analysis:

Geographically, the Machine Learning Artificial intelligence market is segmented across the following regions: North America, Europe, Latin America, Asia Pacific, and Middle East & Africa.

Key Coverage of Report:

Key insights of the report:

In conclusion, the Machine Learning Artificial intelligence Market report provides a detailed study of the market by taking into account leading companies, present market status, and historical data for accurate market estimations, which will serve as an industry-wide database for both established players and new entrants in the market.

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage, and more. These reports deliver an in-depth study of the market with industry analysis, the market value for regions and countries, and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes

Market Research Intellect

New Jersey ( USA )

Tel: +1-650-781-4080

Original post:

Machine Learning Artificial intelligence Market Size and Growth By Leading Vendors, By Types and Application, By End Users and Forecast to 2020-2027 -...

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

