
Machine learning predicts who will win "The Bachelor" – Big Think

Posted: May 5, 2022 at 1:44 am


First airing in 2002, The Bachelor is a titan in the world of Reality TV and has kept its most loyal viewers hooked for a full 26 seasons. To the uninitiated, the show follows 30 female contestants as they battle for the heart of a lone male bachelor, who proposes to the winner.

The contest begins the moment the women step out of a limo to meet the lead on Night One, which culminates in him handing the First Impression Rose to the lady with whom he had the most initial chemistry. Over eight drama-fuelled weeks, the contestants travel to romantic destinations for their dates. At the end of each week, the lead selects one or two women for a one-on-one date, while eliminating up to five from the competition.

As self-styled mega-fans of The Bachelor, Abigail Lee and her colleagues at the University of Chicago's unofficial Department of Reality TV Engineering have picked up on several recurring characteristics in the women who tend to make it further in the competition. Overall, younger, white contestants are far more likely to succeed, with just one 30-something and one woman of color winning the lead's heart in The Bachelor's 20-year history, a long-standing source of controversy.

The researchers are less clear on how other factors affect the contestants' chances of success, such as whether they receive the First Impression Rose or are selected earlier for their first one-on-one date. Hometown and career also seem to have an unpredictable influence, though contestants with questionable job descriptions like "Dog Lover," "Free Spirit," and "Chicken Enthusiast" have rarely made it far.

For Lee's team, such a diverse array of contestant parameters makes the show ripe for analysis with machine learning. In their study, the team compiled a dataset covering all 422 contestants who participated in seasons 11 through 25. The researchers encountered some adversity along the way, noting that they "consum[ed] multiple glasses of wine per night" during data collection.

Despite this setback, they used the data to train machine learning algorithms to predict how far a given contestant would progress through the competition given her characteristics. In searching for the best algorithm, the team tried neural networks, linear regression, and random forest classification.
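For illustration only (the article does not include the team's code), a comparison of these three model families on tabular contestant features might be sketched as follows; the features and labels below are synthetic stand-ins for the real dataset:

import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the real dataset: one row per contestant.
rng = np.random.default_rng(0)
n = 300
X = pd.DataFrame({
    "age": rng.integers(21, 36, n),
    "received_first_impression_rose": rng.integers(0, 2, n),
    "week_of_first_one_on_one": rng.integers(1, 9, n),
})
y = rng.integers(1, 11, n)  # week the contestant was eliminated (synthetic)

# The three model families the team reports trying. Linear regression treats
# the elimination week as a numeric target (scored by R^2); the other two
# treat it as a class label (scored by accuracy).
models = {
    "linear regression": LinearRegression(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validation score = {scores.mean():.3f}")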

While the team's neural network performed the best overall in predicting the parameters of the most successful contestants, all three models were consistent with each other. This allowed them to confidently predict the characteristics of the woman with the highest probability of progressing far through the contest: 26 years of age, white, from the Northwest, works as a dancer, received her first one-on-one date on week 6, and didn't receive the First Impression Rose.

Lee's team laments that The Bachelor's viewership has steadily declined over the past few seasons. They blame a variety of factors, including influencer contestants (who are more concerned with growing their online following than finding true love) and the production crew increasingly meddling in the show's storylines, such as the infamous "Champagne-gate" of season 24.

By drawing on the insights gathered through their analysis, which the authors emphasize was done in their free time, the researchers hope that The Bachelor's producers will find new ways to shake up the format while improving the odds for contestants from a more diverse range of backgrounds, ensuring the show remains an esteemed cultural institution for years to come.

Of course, as a consolation prize, there's always Bachelor in Paradise.

Go here to read the rest:

Machine learning predicts who will win "The Bachelor" - Big Think

Written by admin |

May 5th, 2022 at 1:44 am

Posted in Machine Learning

Are machine-learning tools the future of healthcare? – Cosmos

Posted: at 1:43 am


Terms like "machine learning," "artificial intelligence" and "deep learning" have all become science buzzwords in recent years. But can these technologies be applied to saving lives?

The answer to that is a resounding yes. Future developments in health science may actually depend on integrating rapidly growing computing technologies and methods into medical practice.

Cosmos spoke with researchers from the University of Pittsburgh, in Pennsylvania, US, who have just published a paper in Radiology on the use of machine-learning techniques to analyse large data sets from brain trauma patients.

Co-lead author Shandong Wu, associate professor of radiology, is an authority on the use of machine learning in medicine. "Machine-learning techniques have been around for several decades already," he explains. "But it was in about 2012 that the so-called deep learning technique became mature. It attracted a lot of attention from the research field, not only in medicine or healthcare, but in other domains, such as self-driving cars and robotics."


So, what is deep learning? "It's a kind of multi-layered, neural network-based model that is constantly mimicking how the human brain works to process a large set of data to learn or distill information," explains Wu.
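As a loose illustration of "multi-layered" (this example is not from the article), a minimal deep network stacks several layers that each transform the output of the previous one. In PyTorch this can be sketched as:

import torch
import torch.nn as nn

# Three stacked layers: each transforms the previous layer's output,
# which is what makes the network "deep."
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),  # input (e.g., a flattened image) -> hidden
    nn.Linear(128, 64), nn.ReLU(),   # second hidden layer
    nn.Linear(64, 10),               # output layer (e.g., 10 classes)
)

x = torch.randn(32, 784)             # a batch of 32 flattened 28x28 images
logits = model(x)
print(logits.shape)                  # torch.Size([32, 10])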

He attributes the increased maturity of machine-learning techniques in recent years to three interrelated developments: technical improvements in machine-learning algorithms; developments in the hardware being used, such as improved graphics processing units; and the large volumes of digitised data now readily available.

That data is key. Lots of it.


Machine-learning techniques use data to train the model to function better, and the more data the better. "If you only have a small set of data, then you don't have a very good model," Wu explains. "You may have very good questioning or good methodology, but you're not able to get a better model, because the model learns from lots of data."

Even though the available medical data is not as large as, say, social media data, there is still plenty to work with in the clinical domain.

"Machine-learning models and algorithms can inform clinical decision-making, rapidly analysing massive amounts of data to identify patterns," says the paper's other co-lead author, David Okonkwo.

"Human beings can only process so much information. Machine learning permits orders of magnitude more information available than what an individual human can process," Okonkwo adds.

Okonkwo, a professor of neurological surgery, focuses on caring for patients with brain and spinal cord injuries, particularly those with traumatic brain injuries.

"Our goal is to save lives," says Okonkwo. "Machine-learning technologies will complement human experience and wisdom to maximise the decision-making for patients with serious injuries."

"Even though today you don't see many examples, this will change the way that we practise medicine. We have very high hopes for machine learning and artificial intelligence to change the way that we treat many medical conditions, from cancer, to making pregnancy safer, to solving the problems of COVID."

But important safeguards must be put in place. Okonkwo explains that institutions such as the US Food and Drug Administration (FDA) must ensure that these new technologies are safe and effective before being used in real life-or-death scenarios.

Wu points out that the FDA has already approved about 150 artificial intelligence or machine learning-based tools. "Tools need to be further developed or evaluated or used with physicians in the clinical settings to really examine their benefit for patient care," he says. "The tools are not there to replace your physician, but to provide the tools and information to better inform physicians."

Read the original post:

Are machine-learning tools the future of healthcare? - Cosmos

Written by admin |

May 5th, 2022 at 1:43 am

Posted in Machine Learning

The race to digitization in logistics through machine learning – FreightWaves

Posted: at 1:43 am


A recent Forbes article highlighted the importance of increasing digital transformation in logistics and argued that many tech leaders should be adopting "tech-forward thinking, execution and delivery in order to deliver with speed and keep a laser focus on the customer."

Since the COVID-19 pandemic, and even before, many logistics companies have been turning to technology to streamline their processes. For many, full digitization across the supply chain is the ultimate goal.

Despite many companies already taking steps toward advancing digitization across their supply chains, these processes remain fragmented because of the industry's many moving parts and sectors (integrators, forwarders and owners, among others) and the distinct processes each of them uses.

Scale AI is partnering with companies in the logistics industry to better automate processes across the board and eliminate bottlenecks by simplifying integration, commercial invoicing, document processing and more through machine learning (ML).

ML is a subfield of artificial intelligence that allows applications to predict outcomes without having to be specifically programmed to do so.

The logistics industry has historically depended on lots of paperwork, and this continues to be a bottleneck today. Many companies already use technology like optical character recognition (OCR) or template-based intelligent document processing (IDP). Both are substandard systems that can process raw data but require human key entry, or engineers to create and maintain templates, before the data becomes usable. This is costly and cannot be scaled easily. In a world where end users expect results instantly and at high quality, these methods take too long while providing low accuracy.

"In the industry of logistics, it is a race to digitization to create a competitive edge," said Melisa Tokmak, general manager of Document AI at Scale. "Trying to use regular methods that require templates and heavily rely on manual key entry is not providing a good customer experience or accurate data quickly. This is making companies lose customer trust while missing out on the ROI machine learning can give them easily."

Scale's mission is to accelerate the development of artificial intelligence.

Scale builds ML models and fine-tunes them for customers using a small sample of their documents. It's this method that removes the need for templates and allows all documents to be processed accurately within seconds, without human intervention. Tokmak believes that the logistics industry needs this type of technology now more than ever.

"In the market right now, every consumer wants things faster, better and cheaper. It is essential for logistics companies to be able to serve the end user better, faster and cheaper. That means meeting [the end users] where they are," Tokmak said. "This change is already happening, so the question is how can you as a company do this faster than others so that you are early in building competitive edge?"

Rather than simply learning where on a document to find a field, Scale's ML models are capable of understanding the layout, hierarchy and meaning of every field of the document.

Document AI is also flexible to layout changes, table boundaries and other irregularities, compared with traditional template-based systems.

Tokmak believes that because the current OCR and IDP technology is not getting companies the results they need, the next step is partnering with companies like Scale to incorporate ML into their processes. After adopting this technology, Tokmak added, companies can learn more about the market and gain visibility into global trade, which in turn can lead to building new, relevant tech.

Flexport, a recognizable name in the logistics industry and a customer of Scale AI, is what is referred to as a digital forwarder. Digital forwarders are companies that digitally guide customers through the whole shipment process without owning any assets themselves. They function as a tech platform to make global trade easy, looking end to end to bring both sides of the marketplace together and make shipping easier.

Before integrating an ML solution, Flexport struggled to make more traditional means of data extraction, like template-based and error-prone OCR, work. Knowing its expertise was in logistics, Flexport partnered with Scale AI, an expert in ML, to reach its mission of making global trade easy and accessible for everyone more quickly, efficiently and accurately. Now Flexport prides itself on its ability to process information more quickly and without human intervention.

As the supply chain crisis worsened, Flexport's needs evolved. It became increasingly important for Flexport to extract estimated times of arrival (ETAs) to give end users more visibility. Scale's Document AI solution accommodated these changing requirements by retraining the ML models to extract additional fields from unstructured documents, in seconds and without templates, providing more visibility into global trade at a time when many were struggling to get this level of insight at all.

According to a recent case study, Flexport has more than 95% accuracy with no templates and a less than 60-second turnaround since partnering with Scale.

Tokmak believes that in the future, companies ideally should have technology that functions as a knowledge graph (a graph that represents things like objects, events, situations or concepts and illustrates the relationships among them) to make business decisions accurately and fast. As it pertains to the logistics industry, Tokmak describes a "global trade knowledge graph," which would provide information on where things are coming from and going and how things are working, with sensors all coming together to deliver users the best experience in the fastest way possible.
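To make the term concrete (a toy illustration, not Scale's system), a knowledge graph can be as simple as nodes for entities and labeled edges for the relationships among them:

import networkx as nx

# Toy "global trade knowledge graph": nodes are entities (shipments, ports,
# documents) and edges carry the relationship that links them.
g = nx.DiGraph()
g.add_edge("shipment_123", "port_of_LA", relation="arrives_at")
g.add_edge("shipment_123", "invoice_789", relation="documented_by")
g.add_edge("invoice_789", "eta_2022-05-12", relation="lists")

# Queries then become graph traversals, e.g., everything known about a shipment.
print(list(g.edges("shipment_123", data=True)))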

Realistically, this will take time to fully incorporate and will require partnership from the logistics companies. "The trick to enabling this future is starting with what will bring the best ROI and what will help your company find the easiest way to build new cutting-edge products immediately," Tokmak said. "There is a lot ML can achieve in this area without being very hard to adopt. Document processing is one of them: a problem not solved with existing methods, but one that can be solved with machine learning. It is a high-value area with benefits of reducing costs, reducing delays, and bringing one source of truth for organizations within the company to operate with."

Tokmak stated that many in the industry have been disappointed with previous methods and were afraid to switch to ML for fear of the same disappointment, but that has changed quickly in the last few years. Companies do understand ML is different, and they need to get on this train fast to actualize the gains from the technology.

"It is so important to show people the power of ML and how every industry is getting reshaped with ML," Tokmak said. "The first adopters are the winners."


Continued here:

The race to digitization in logistics through machine learning - FreightWaves

Written by admin |

May 5th, 2022 at 1:43 am

Posted in Machine Learning

Machine learning-based prediction of relapse in rheumatoid arthritis patients using data on ultrasound examination and blood test | Scientific Reports…

Posted: at 1:43 am



Go here to read the rest:

Machine learning-based prediction of relapse in rheumatoid arthritis patients using data on ultrasound examination and blood test | Scientific Reports...

Written by admin |

May 5th, 2022 at 1:43 am

Posted in Machine Learning

How to create fast and reproducible machine learning models with steppy? – Analytics India Magazine

Posted: at 1:43 am


In machine learning work, building pipelines and getting the most out of them is crucial. It is hard for any one library to provide everything, and libraries that try tend to become heavyweight. Steppy is a lightweight library that aims to help build optimal pipelines. In this article, we discuss the steppy library and walk through its use on a simple classification problem.

Let's start by introducing steppy.

Steppy is an open-source Python library for performing data science experiments. The main reason behind developing it was to make experimentation fast and reproducible. It is lightweight and enables us to build high-performing machine learning pipelines. Its developers aim to let data science practitioners focus on the data rather than on software development issues.

As discussed above, steppy provides an environment where experiments are fast, reproducible, and easy. Along with these capabilities, the library removes common reproducibility difficulties and offers functions that beginners can use as well. It is built around two main abstractions with which we make machine learning pipelines: Step, a node in the pipeline that wires a transformer to its inputs and handles caching and persistence, and BaseTransformer, the object that encapsulates a unit of fit-and-transform logic.

A simple implementation can make the intentions behind this library clear, but before all this we need to install it, which requires Python 3.5 or above in the environment. If we have that, we can install the library with the following command:
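pip install steppy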

After installation, we are ready to use steppy for data science experiments. Let's take a look at a basic implementation.

In this implementation, we will look at how steppy can be used to create steps for a classification task.

In this article, we are going to use the iris dataset provided by sklearn, which can be imported using the following line of code:

from sklearn.datasets import load_iris

Let's split the dataset into train and test sets.
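For example, with standard sklearn calls:

from sklearn.model_selection import train_test_split

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42)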

One thing we need to do while using steppy is to put our data into dictionaries, so that the steps we create can communicate with each other. We can do this in the following way:
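Following steppy's nested-dictionary convention (the group name 'input' is our choice):

train_data = {'input': {'X': X_train, 'y': y_train}}
test_data = {'input': {'X': X_test, 'y': y_test}}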

Now we are ready to create steps.

In this article, we are going to fit a random forest algorithm to classify the iris data, which means that for steppy we define the random forest as a transformer.
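A sketch of such a transformer built on steppy's BaseTransformer interface (the class name and method bodies here are illustrative):

import joblib
from sklearn.ensemble import RandomForestClassifier
from steppy.base import BaseTransformer

class RandomForestTransformer(BaseTransformer):
    def __init__(self):
        super().__init__()
        self.estimator = RandomForestClassifier()

    def fit(self, X, y):
        # Train the underlying random forest on the data routed to this step.
        self.estimator.fit(X, y)
        return self

    def transform(self, X, **kwargs):
        # Return predictions as a dictionary, the output format steppy expects.
        return {'y_pred': self.estimator.predict(X)}

    def persist(self, filepath):
        # Save the fitted estimator so the step can be reloaded without retraining.
        joblib.dump(self.estimator, filepath)

    def load(self, filepath):
        self.estimator = joblib.load(filepath)
        return self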

Here we have defined the functions that initialize the random forest, fit and transform the data, and save and reload the fitted parameters. Now we can fit the above transformer into a step in the following way:
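A minimal step definition, assuming steppy's Step signature (the experiment directory is an arbitrary local path used for caching):

from steppy.base import Step

step = Step(
    name='random_forest',
    transformer=RandomForestTransformer(),
    input_data=['input'],
    experiment_directory='./experiment',
)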

Output:

Let's visualize the step by evaluating it in a notebook:

step

Output:

Here we can see the steps we have defined in the pipeline. Let's train the pipeline.

We can train our defined pipeline using the following line of code.
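Using the training dictionary defined earlier:

step.fit_transform(train_data)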

Output:

In the output, we can see the steps that were followed to train the pipeline. Let's evaluate the pipeline with the test data.
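Again using the dictionary convention above, the predictions come back under the key returned by our transformer:

predictions = step.transform(test_data)
y_pred = predictions['y_pred']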

Output:

Here we can see the testing procedure followed by the library. Let's check the accuracy of the model.
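For example, with sklearn's accuracy metric:

from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))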

Output:

Here we can see that the results are good, and if you use the library yourself you will notice how lightweight it is.

In this article, we discussed the steppy library, an open-source, lightweight and easy way to implement machine learning pipelines. Along with this, we looked at the need for such a library and created the steps of a pipeline using it.

Read this article:

How to create fast and reproducible machine learning models with steppy? - Analytics India Magazine

Written by admin |

May 5th, 2022 at 1:43 am

Posted in Machine Learning

New machine learning maps the potentials of proteins – Nanowerk

Posted: at 1:43 am


May 04, 2022 (Nanowerk News) The biotech industry is constantly searching for the perfect mutation, where properties from different proteins are synthetically combined to achieve a desired effect. It may be necessary to develop new medicines, or enzymes that prolong the shelf-life of yogurt, break down plastics in the wild, or make washing powder effective at low water temperatures.

New research from DTU Compute and the Department of Computer Science at the University of Copenhagen (DIKU) can, in the long term, help the industry accelerate the process. In the journal Nature Communications ("Learning meaningful representations of protein sequences"), the researchers explain how a new way of using machine learning (ML) draws a map of proteins that makes it possible to draw up a candidate list of the proteins you need to examine more closely.

The paper's illustration depicts an example of the shortest path between two proteins, taking the geometry of the map into account. By defining distances in this way, it is possible to reach biologically more precise and robust conclusions. (Image: W. Boomsma, N. S. Detlefsen, S. Hauberg)

"In recent years, we have started to use machine learning to form a picture of permitted mutations in proteins. The problem is, however, that you get different images depending on what method you use, and even if you train the same model several times, it can provide different answers about how the biology is related."

"In our work, we are looking at how to make this process more robust, and we are showing that you can extract significantly more biological information than you have previously been able to. This is an important step forward in order to be able to explore the mutation landscape in the hunt for proteins with special properties," says postdoc Nicki Skafte Detlefsen from the Cognitive Systems section at DTU Compute.

The map of the proteins

A protein is a chain of amino acids, and a mutation occurs when just one of these amino acids in the chain is replaced with another. As there are 20 natural amino acids, the number of mutations increases so quickly that it is completely impossible to study them all. There are more possible mutations than there are atoms in the universe, even for simple proteins. It is not possible to test everything experimentally, so you must be selective about which proteins you try to produce synthetically.

The researchers from DIKU and DTU Compute have used their ML model to generate a picture of how proteins are linked. By presenting the model with many examples of protein sequences, it learns to draw a map with a dot for each protein, so that closely related proteins are placed close to each other while distantly related proteins are placed far apart.

The ML model is based on mathematics and geometry developed to draw maps. Imagine that you must make a map of the globe. If you zoom in on Denmark, you can easily draw a map on a piece of paper that preserves the geography. But if you must draw the whole earth, mistakes will occur because you stretch the globe, so that the Arctic becomes a long country instead of a pole. On the map, the earth is distorted. For this reason, research in map-making has developed a lot of mathematics that describes the distortions and compensates for them on the map.

This is exactly the theory that DIKU and DTU Compute have been able to extend to cover their machine learning (deep learning) model for proteins.
Because they have mastered the distortion on the map, they can also compensate for it.

"It enables us to talk about what a sensible distance measure is between proteins that are closely related, and then we can suddenly measure it. In this way, we can draw a path through the map of the proteins that tells us which way we expect a protein to develop from one to another, i.e., mutate, since they are all related by evolution. In this way, the ML model can measure a distance between the proteins and draw optimal paths between promising proteins," says Wouter Boomsma, associate professor in the section for Machine Learning at DIKU.

The researchers have tested the model on data from numerous proteins found in nature, whose structure is known, and they can see that the distance between proteins starts to correspond to their evolutionary development, so that proteins that are close to each other evolutionarily are placed close together on the map.

"We are now able to put two proteins on the map and draw the curve between them. On the path between the two proteins are possible proteins, which have closely related properties. This is no guarantee, but it provides an opportunity to form a hypothesis about which proteins the biotech industry ought to test when new proteins are designed," says Søren Hauberg, professor in the Cognitive Systems section at DTU Compute.

The unique collaboration between DTU Compute and DIKU was established through a new centre for Machine Learning in Life Sciences (MLLS), which started last year with the support of the Novo Nordisk Foundation. In the centre, researchers in artificial intelligence from both universities work together to solve fundamental problems in machine learning, driven by important questions in biology.

The developed protein maps are part of a large-scale project that spans from basic research to industrial applications, e.g. in collaboration with Novozymes and Novo Nordisk.
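As a loose illustration of the map idea, and not the authors' actual model, one can place items on a 2-D map and then measure distances along a neighborhood graph of that map rather than as straight lines; the random features below are stand-ins for learned protein representations:

import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

# Stand-in for learned protein representations: 200 proteins, 64 features each.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))

# Draw the "map": one 2-D point per protein.
coords = TSNE(n_components=2, random_state=0).fit_transform(features)

# Connect each protein to its nearest neighbors and accumulate path lengths
# along the map instead of using straight-line distances, which is closer
# in spirit to respecting the map's geometry.
graph = kneighbors_graph(coords, n_neighbors=10, mode='distance')
dist, predecessors = shortest_path(graph, return_predecessors=True)
print(dist[0, 1])  # along-the-map distance between two proteins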

Read the original:

New machine learning maps the potentials of proteins - Nanowerk

Written by admin |

May 5th, 2022 at 1:43 am

Posted in Machine Learning

Deep learning is bridging the gap between the digital and the real world – VentureBeat

Posted: at 1:43 am



Algorithms have always been at home in the digital world, where they are trained and developed in perfectly simulated environments. The current wave of deep learning facilitates AI's leap from the digital to the physical world. The applications are endless, from manufacturing to agriculture, but there are still hurdles to overcome.

To traditional AI specialists, deep learning (DL) is old hat. It got its breakthrough in 2012, when Alex Krizhevsky successfully deployed convolutional neural networks, the hallmark of deep learning technology, for the first time with his AlexNet algorithm. It's neural networks that have allowed computers to see, hear and speak. DL is the reason we can talk to our phones and dictate emails to our computers. Yet DL algorithms have always played their part in the safe simulated environment of the digital world. Pioneer AI researchers are working hard to introduce deep learning to our physical, three-dimensional world. Yep, the real world.

Deep learning could do much to improve your business, whether you are a car manufacturer, a chipmaker or a farmer. Although the technology has matured, the leap from the digital to the physical world has proven to be more challenging than many expected. This is why we've been talking about smart refrigerators doing our shopping for years, but no one actually has one yet. When algorithms leave their cozy digital nests and have to fend for themselves in three very real and raw dimensions, there is more than one challenge to be overcome.

The first problem is accuracy. In the digital world, algorithms can get away with accuracies of around 80%. That doesn't quite cut it in the real world. "If a tomato harvesting robot sees only 80% of all tomatoes, the grower will miss 20% of his turnover," says Albert van Breemen, a Dutch AI researcher who has developed DL algorithms for agriculture and horticulture in The Netherlands. His AI solutions include a robot that cuts the leaves of cucumber plants, an asparagus harvesting robot and a model that predicts strawberry harvests. His company is also active in the medical manufacturing world, where his team created a model that optimizes the production of medical isotopes. "My customers are used to 99.9% accuracy and they expect AI to do the same," Van Breemen says. "Every percent of accuracy loss is going to cost them money."

To achieve the desired levels, AI models have to be retrained all the time, which requires a flow of constantly updated data. Data collection is both expensive and time-consuming, as all that data has to be annotated by humans. To solve that challenge, Van Breemen has outfitted each of his robots with functionality that lets it know when it is performing either well or badly. When making mistakes, the robots upload only the specific data where they need to improve. That data is collected automatically across the entire robot fleet. So instead of receiving thousands of images, Van Breemen's team only gets a hundred or so, which are then labeled, tagged and sent back to the robots for retraining. "A few years ago everybody said that data is gold," he says. "Now we see that data is actually a huge haystack hiding a nugget of gold. So the challenge is not just collecting lots of data, but the right kind of data."

His team has developed software that automates the retraining on new experiences. Their AI models can now train for new environments on their own, effectively cutting the human out of the loop. They've also found a way to automate the annotation process by training an AI model to do much of the annotation work for them. Van Breemen: "It's somewhat paradoxical, because you could argue that a model that can annotate photos is the same model I need for my application. But we train our annotation model with a much smaller data size than our goal model. The annotation model is less accurate and can still make mistakes, but it's good enough to create new data points we can use to automate the annotation process."

The Dutch AI specialist sees a huge potential for deep learning in the manufacturing industry, where AI could be used for applications like defect detection and machine optimization. The global smart manufacturing industry is currently valued at 198 billion dollars and has a predicted growth rate of 11% until 2025. The Brainport region around the city of Eindhoven, where Van Breemen's company is headquartered, is teeming with world-class manufacturing corporates, such as Philips and ASML. (Van Breemen has worked for both companies in the past.)

A second challenge of applying AI in the real world is the fact that physical environments are much more varied and complex than digital ones. A self-driving car that is trained in the US will not automatically work in Europe, with its different traffic rules and signs. Van Breemen faced this challenge when he had to apply his DL model that cuts cucumber plant leaves to a different grower's greenhouse. "If this took place in the digital world, I would just take the same model and train it with the data from the new grower," he says. "But this particular grower operated his greenhouse with LED lighting, which gave all the cucumber images a bluish-purple glow our model didn't recognize. So we had to adapt the model to correct for this real-world deviation. There are all these unexpected things that happen when you take your models out of the digital world and apply them to the real world."

Van Breemen calls this the "sim-to-real gap," the disparity between a predictable and unchanging simulated environment and the unpredictable, ever-changing physical reality. Andrew Ng, the renowned AI researcher from Stanford and cofounder of Google Brain who also seeks to apply deep learning to manufacturing, speaks of the "proof of concept to production gap." It's one of the reasons why 75% of all AI projects in manufacturing fail to launch. According to Ng, paying more attention to cleaning up your data set is one way to solve the problem. The traditional view in AI was to focus on building a good model and let the model deal with noise in the data. In manufacturing, however, a data-centric view may be more useful, since the data set size is often small. Improving the data will then immediately improve the overall accuracy of the model.

Apart from cleaner data, another way to bridge the sim-to-real gap is by using cycleGAN, an image translation technique that connects two different domains, made popular by aging apps like FaceApp. Van Breemen's team researched cycleGAN for its application in manufacturing environments. The team trained a model that optimized the movements of a robotic arm in a simulated environment, where three simulated cameras observed a simulated robotic arm picking up a simulated object. They then developed a DL algorithm based on cycleGAN that translated the images from the real world (three real cameras observing a real robotic arm picking up a real object) to a simulated image, which could then be used to retrain the simulated model. Van Breemen: "A robotic arm has a lot of moving parts. Normally you would have to program all those movements beforehand. But if you give it a clearly described goal, such as picking up an object, it will now optimize the movements in the simulated world first. Through cycleGAN you can then use that optimization in the real world, which saves a lot of man-hours." Each separate factory using the same AI model to operate a robotic arm would have to train its own cycleGAN to tweak the generic model to suit its own specific real-world parameters.
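For readers unfamiliar with the technique, the heart of cycleGAN is a cycle-consistency loss: translating an image into the other domain and back should reproduce the original. A bare-bones sketch of that loss follows (illustrative placeholders, not the team's code; real cycleGAN generators are deep convolutional networks trained jointly with adversarial losses):

import torch
import torch.nn as nn

# G translates real -> simulated, F translates simulated -> real.
# Single conv layers stand in for the deep generators used in practice.
G = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))
F = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))
l1 = nn.L1Loss()

real = torch.rand(8, 3, 64, 64)  # batch of real camera images
sim = torch.rand(8, 3, 64, 64)   # batch of simulated images

# Cycle consistency: real -> sim -> real and sim -> real -> sim
# should each land back where they started.
cycle_loss = l1(F(G(real)), real) + l1(G(F(sim)), sim)
print(cycle_loss.item())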

The field of deep learning continues to grow and develop. Its new frontier is called reinforcement learning. This is where algorithms change from mere observers to decision-makers, giving robots instructions on how to work more efficiently. Standard DL algorithms are programmed by software engineers to perform a specific task, like moving a robotic arm to fold a box. A reinforcement learning algorithm could discover more efficient ways to fold boxes outside of that preprogrammed range.

It was reinforcement learning (RL) that made an AI system beat the world's best Go player back in 2016. Now RL is also slowly making its way into manufacturing. The technology isn't mature enough to be deployed just yet, but according to the experts, this will only be a matter of time.

With the help of RL, Albert van Breemen envisions optimizing an entire greenhouse. This is done by letting the AI system decide how the plants can grow in the most efficient way for the grower to maximize profit. The optimization process takes place in a simulated environment, where thousands of possible growth scenarios are tried out. The simulation plays around with different growth variables like temperature, humidity, lighting and fertilizer, and then chooses the scenario in which the plants grow best. The winning scenario is then translated back to the three-dimensional world of a real greenhouse. "The bottleneck is the sim-to-real gap," Van Breemen explains. "But I really expect those problems to be solved in the next five to ten years."

As a trained psychologist, I am fascinated by the transition AI is making from the digital to the physical world. It goes to show how complex our three-dimensional world really is and how much neurological and mechanical skill is needed for simple actions like cutting leaves or folding boxes. This transition is making us more aware of our own internal, brain-operated algorithms that help us navigate the world, and which have taken millennia to develop. It'll be interesting to see how AI is going to compete with that. And if AI eventually catches up, I'm sure my smart refrigerator will order champagne to celebrate.

Bert-Jan Woertman is the director of Mikrocentrum.


See the article here:

Deep learning is bridging the gap between the digital and the real world - VentureBeat

Written by admin |

May 5th, 2022 at 1:43 am

Posted in Machine Learning

IonQ And Hyundai Steer Its Partnership Toward Quantum Machine Learning To Recognize Traffic Signs And 3D Objects – Forbes

Posted: at 1:43 am


IonQ and Hyundai have partnered to apply quantum ML to automotive recognition of traffic signals and other objects.

Automobile manufacturers, suppliers, dealers, and service providers understand that quantum computing will eventually have a major impact on most every aspect of the industry. Daimler, Honda, Hyundai, Ford, BMW, Volkswagen, and Toyota all have some form of quantum evaluation program in place.

Classical computers cannot solve many significant real-world problems because of computational complexity, or because the calculations would take an inordinate amount of time, perhaps hundreds, thousands, or even millions of years. Quantum computing offers the potential to solve these problems in a reasonable amount of time. Current hardware isn't yet advanced enough to support the number of qubits needed, and researchers are still working to implement the error correction required to build fault-tolerant quantum machines.

The same hardware and error correction constraints limit the full potential of quantum machine learning. Even so, it has already proven helpful in some instances on current quantum computers, and it can exceed the results of some classical models.

IonQ has a research history with quantum machine learning, so I was looking forward to talking to Peter Chapman, CEO of IonQ, about his partnership with Hyundai Motors.

First, Chapman explained that the partnership's goal is to determine quantum computing's potential to provide improved mobility solutions for autonomous vehicles. For these projects, IonQ will use Aria, its latest trapped-ion quantum computer.

IonQ combined its quantum computing expertise with Hyundai's lithium battery knowledge two months ago. It is developing sophisticated quantum chemistry simulations to study battery charge and discharge cycles, capacity, durability, and safety.

As an evolution of their relationship, the IonQ and Hyundai team will develop quantum machine learning (QML) models to detect and recognize traffic signs and identify 3D objects such as pedestrians and cyclists.

Recognizing traffic signs and identifying 3D objects are critical elements of the Advanced Driver-Assistance Systems (ADAS) used by autonomous vehicles. ADAS depends on cameras, lidar, radar, and other sensors for inputs to onboard AV computers that interpret and respond to the driving environment. A 2016 study by the National Highway Traffic Safety Administration found that 94% to 96% of accidents are caused by human error. With quantum-enhanced inputs for ADAS, it is likely that human error can be minimized to reduce accidents.

Early in his career, Chapman served as president of a Ray Kurzweil company, where he gained machine learning experience. As a result, he has a deep knowledge of classical machine learning models and the complicated steps needed to identify images. More importantly, he understands why QML will be much faster and more efficient than its classical counterpart.

"QML doesn't need numerous processing steps for traffic road sign recognition like classical approaches to object detection," he said. "Quantum recognizes a sign and interprets its meaning in one single step."

IonQ has already completed the difficult computational part of the road sign recognition project: it has trained QML models on a standardized database of 50,000 images to recognize 43 different classifications of road signs. Next, IonQ will test its QML models under real-world driving conditions using Hyundai's test environment.

Chapman also explained why he believes quantum machine learning and object recognition will prove much more powerful than their classical counterparts.

"What happens if your car sees something that it has never been trained on before? Let's take an outlier case, such as a person with a triple-wide stroller, walking two dogs on a leash, talking on their iPhone, and carrying a bag of groceries. If the training data had never seen this scenario, how would the car respond? I think quantum machine learning will fill in those gaps and provide a known response for things it hasn't seen before."

IonQ Quantum Machine Learning milestones

The following summarizes various QML projects IonQ has participated in over the past few years.

December 2020

September 2021

November 2021

Analyst notes:

1. In October 2021, IonQ became the first pure-play quantum company listed on the New York Stock Exchange.

2. Quantum computing is still in its infancy, so it's too early to select a technology that will lead to error-free quantum systems that use millions of qubits to solve world-changing problems. The technology that ultimately performs at that level may not even be in use today. Scaling to millions of logical qubits is still many years away for all gate-based quantum computers.

3. Qubits are fragile and susceptible to errors caused by interaction with the environment. Error correction is a subject of serious research at almost every quantum company. It will not be possible to scale quantum computers to large numbers of qubits until a workable error correction technique is developed. I expect significant progress in 2022.

4. Technical details of the IonQ-FCAT daily stock return study are available here.

5. Technical details of the IonQ-Zapata hybrid QML research are available here.

6. Access to IonQ quantum systems is available through the cloud on Amazon Braket, Microsoft Azure, and Google Cloud and through direct API access.

Follow Paul Smith-Goodson on Twitter for current information and insights on quantum and AI

Disclosure: My firm, Moor Insights & Strategy, like all research and analyst firms, provides or has provided research, analysis, advising, and/or consulting to many high-tech companies in the industry. I do not hold any equity positions with any companies cited in this column.

Find more from Moor Insights & Strategy on its website, Twitter, LinkedIn, Facebook, Google+, and YouTube.

See original here:

IonQ And Hyundai Steer Its Partnership Toward Quantum Machine Learning To Recognize Traffic Signs And 3D Objects - Forbes

Written by admin |

May 5th, 2022 at 1:43 am

Posted in Machine Learning

VelocityEHS Dream Team of Board-Certified Ergonomists and AI & Machine Learning Scientists Headline Virtual Ergonomics Conference on May 3 – Yahoo…

Posted: at 1:43 am



Addressing musculoskeletal disorders and aligning risk reduction programs to ESG and sustainability efforts top the agenda.

CHICAGO, April 29, 2022 (GLOBE NEWSWIRE) -- VelocityEHS, the global leader in cloud-based environmental, health and safety (EHS) and environmental, social, and corporate governance (ESG) software, announced today it will host a new virtual event, The VelocityEHS Ergonomics Conference, on May 3, 2022. During this free, one-day conference, experts will provide thought leadership on ways to focus on the job improvement process while reducing workplace injuries related to musculoskeletal disorders (MSDs). VelocityEHS expert speakers include board-certified professional ergonomists (CPEs), certified safety professionals (CSPs), certified industrial hygienists (CIHs), a PhD in machine learning, and a doctor of physical therapy.

Additional topics include implementation of best practices, tools for calculating a return on investment, machine learning advancements, physical demands analysis, the impacts of ergonomics on corporations, and insights from experts on the front lines at Cummins, Lear Corporation, Southwire Company and W.L. Gore & Associates.

Register now for the whole conference or to attend specific sessions. Registrants will also have on-demand access to the sessions for 30 days following the live event.

"One of the best ways to judge the health, sustainability and vitality of a global enterprise is to look at how seriously they take ergonomics," said John Damgaard, CEO of VelocityEHS. "One thing the very best companies in manufacturing, pharmaceuticals, food & beverage, chemical and so on have in common is the investment they make in ergonomics and designing risk out of their processes. With more CPEs than any other company, and game-changing technology that harnesses AI & machine learning to help non-experts achieve expert results, VelocityEHS is the most trusted ergonomics partner of the Fortune 500 and beyond."


The free, daylong event features a packed and unparalleled lineup of experts and content, with ergonomics insights on a broad range of topics, including:

Advancing Your Ergonomics Progress, 10-10:15 a.m. ET. Presented by Jamie Mallon, CPE, Chief Revenue Officer at VelocityEHS. Mallon will share his 25+ years of experience consulting with Fortune 500 companies to help them advance the impact of their ergonomics improvement process, enabling them to identify and design out risk before injury.

Building an Effective Ergonomics Process, 10:15-11 a.m. ET. Presented by Kristi Hames, CIH, CSP, Senior Solutions Strategist, VelocityEHS, and Christy Lotz, CPE, Director of Ergonomics, VelocityEHS. This workshop covers the key elements of a written ergonomics plan, considerations for establishing your ergonomics team, activities you can perform to enhance stakeholder alignment, and how to select metrics that are aligned with your process maturity and stakeholder objectives.

Process Management and ROI, 11:15 a.m.-12 p.m. ET. Presented by Rick Barker, CPE, CSP, Principal Solutions Strategist, VelocityEHS, and Rachel Zoky, CPE, Senior Consultant, VelocityEHS. This session covers ways to sustain a successful ergonomics process by updating your program as it matures.

Quantifying Overall MSD Risk Level: A Panel Discussion with VelocityEHS Customers, 12:15-12:45 p.m. ET. Presented by Blake McGowan, CPE, Director of Ergonomics Research, VelocityEHS; Kevin Perdeaux, CPE, Director of Global Ergonomics, Lear Corporation; and Ryan Goad, CPE, Environmental, Health & Safety Manager, Southwire Company.

A Road Map to ActiveEHS (Customers Only), 1-1:30 p.m. ET. Presented by Ben Taft, Senior Product Manager, VelocityEHS, this session will explore how ActiveEHS in ergonomics harnesses AI & machine learning, along with deep domain expertise from VelocityEHS experts, to drive a continuous improvement cycle of prediction, intervention and outcomes.

Physical Demands Analysis: The 5 Most Commonly Asked Questions, 1:45-2:15 p.m. ET. Presented by Arielle West, PT, DPT, Solutions Strategist, VelocityEHS, this session will explore how Physical Demands Analysis (PDA), another tool to manage musculoskeletal disorders, is used to match people to job demands.

How Ergonomics and MSD Risk Reduction Efforts Impact Corporate Sustainability Metrics, 2:30-3:15 p.m. ET. Presented by Blake McGowan, CPE, Director of Ergonomics Research, VelocityEHS, this session will center on the relationship between ergonomics and risk reduction and the need for businesses to improve their performance on the ESG and sustainability front.

How Machine Learning is Advancing Ergonomics Effectiveness, 3:30-4:15 p.m. ET. Presented by Julia Penfield, PhD, Principal Machine Learning Scientist, VelocityEHS, and Rick Barker, CPE, CSP, Principal Solutions Strategist, VelocityEHS, this session will provide a solid understanding of what machine learning is and how it is already being applied to save time and increase effectiveness in three different EHS use cases.

Determining the Value of New Technology: A Panel Discussion with VelocityEHS Customers, 4:30-5 p.m. ET. Presented by Blake McGowan, CPE, Director of Ergonomics Research, VelocityEHS; Sarah Grawe, Ergonomics Manager, Cummins; and Michael Mauro, Divisional Ergonomics and Error Proofing Process Owner, W.L. Gore & Associates. This session features a conversation with Cummins and W.L. Gore & Associates on ways to assess the value of new technology.

VelocityEHS virtual events offer accessible learning opportunities to individuals seeking unique perspectives on the common EHS and ESG issues most affecting companies today. Stay up to date with current and upcoming conferences, webinars and other learning opportunities by visiting the Webinars & Recordings page and following VelocityEHS on LinkedIn.

The VelocityEHS Industrial Ergonomics solution, now with Active Causes & Controls, is available via the VelocityEHS Accelerate Platform, which delivers best-in-class performance in the areas of health, safety, risk, ESG and operational excellence. Backed by the largest global software community of EHS experts and thought leaders, the software drives expert processes so that every team member can produce outstanding results. For more information about VelocityEHS and its complete award-winning software solutions, visit http://www.EHS.com.

About VelocityEHS

Relied on by more than 10 million users worldwide, VelocityEHS is the global leader in true SaaS enterprise EHS technology. Through the VelocityEHS Accelerate Platform, the company helps global enterprises drive operational excellence by delivering best-in-class capabilities for health, safety, environmental compliance, training, operational risk, and environmental, social, and corporate governance (ESG). The VelocityEHS team includes unparalleled industry expertise, with more certified experts in health, safety, industrial hygiene, ergonomics, sustainability, the environment, AI, and machine learning than any other EHS software provider. Recognized by the EHS industry's top independent analysts as a Leader in the Verdantix 2021 Green Quadrant Analysis, VelocityEHS is committed to industry thought leadership and to accelerating the pace of innovation through its software solutions and vision.

VelocityEHS is headquartered in Chicago, Illinois, with locations in Ann Arbor, Michigan; Tampa, Florida; Oakville, Ontario; London, England; Perth, Western Australia; and Cork, Ireland. For more information, visit http://www.EHS.com.

Media Contact
Brad Harbaugh
312.881.2855
bharbaugh@ehs.com

Link:

VelocityEHS Dream Team of Board-Certified Ergonomists and AI & Machine Learning Scientists Headline Virtual Ergonomics Conference on May 3 - Yahoo...

Written by admin |

May 5th, 2022 at 1:43 am

Posted in Machine Learning

Is Link Machine Learning (LML) Heading the Right Direction Monday? – InvestorsObserver

Posted: at 1:43 am


InvestorsObserver gives Link Machine Learning (LML) a strong long-term technical score of 94. The proprietary scoring system takes into account the token's historical trading patterns over recent months up to a year, including its support and resistance levels and where it sits relative to long-term averages. The analysis helps determine whether the token is currently a strong buy-and-hold investment opportunity for traders. LML at this time has a better long-term technical analysis score than 94% of cryptos in circulation. The long-term rank will be most relevant to buy-and-hold investors looking for strong, steady growth when allocating their assets. Combining a high long-term and short-term technical score can also help portfolio managers discover tokens that have bottomed out.


Read the original post:

Is Link Machine Learning (LML) Heading the Right Direction Monday? - InvestorsObserver

Written by admin |

May 5th, 2022 at 1:43 am

Posted in Machine Learning




