
Archive for the ‘Machine Learning’ Category

Clean data, AI advances, and provider/payer collaboration will be key in 2020 – Healthcare IT News

Posted: January 27, 2020 at 8:47 pm


without comments

In 2020, the importance of clean data, advancements in AI and machine learning, and increased cooperation between providers and payers will rise to the fore among important healthcare and health IT trends, predicts Don Woodlock, vice president of HealthShare at InterSystems.

All of these trends are good news for healthcare provider organizations, which are looking to improve the delivery of care, enhance the patient and provider experiences, achieve optimal outcomes, and trim costs.

The importance of clean data will become clear in 2020, Woodlock said.

Data is becoming an increasingly strategic asset for healthcare organizations as they work toward a true value-based care model, he explained. With the power of advanced machine learning models, caregivers can not only prescribe more personalized treatment, but they can even predict and hopefully prevent issues from manifesting.

However, there is no machine learning without clean data, meaning the data needs to be aggregated, normalized and deduplicated, he added.

Don Woodlock, InterSystems

Data science teams spend a significant part of their day cleaning and sorting data to make it ready for machine learning algorithms, and as a result the rate of innovation slows considerably as more time is spent on prep than experimentation, he said. In 2020, healthcare leaders will better see the need for clean data as a strategic asset to help their organization move forward smartly.
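What that prep work looks like in practice can be sketched in a few lines. The following is a minimal illustration, assuming pandas and an invented two-system patient extract (the column names are hypothetical, not from any InterSystems product), of the aggregate, normalize and deduplicate steps described above:

```python
import pandas as pd

# Hypothetical extracts from two source systems; column names are illustrative.
ehr = pd.DataFrame({
    "mrn": ["001", "002", "002"],
    "name": ["Ann Lee ", "BOB RAY", "Bob Ray"],
    "glucose_mg_dl": [98, 142, 142],
})
labs = pd.DataFrame({
    "mrn": ["001", "003"],
    "name": ["ann lee", "Cal Poe"],
    "glucose_mg_dl": [101, 87],
})

# Aggregate: combine records from both systems into one table.
combined = pd.concat([ehr, labs], ignore_index=True)

# Normalize: put free-text fields into a consistent form.
combined["name"] = combined["name"].str.strip().str.title()

# Deduplicate: drop repeated rows for the same patient and reading.
clean = combined.drop_duplicates(subset=["mrn", "name", "glucose_mg_dl"])

print(clean)
```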

This year, AI and machine learning will move from "if and when" to "how and where," Woodlock predicted.

AI certainly is at the top of the hype cycle, but the use in practice currently is very low in healthcare, he noted. This is not such a bad thing as we need to spend time perfecting the technology and finding the areas where it really works. In 2020, I foresee the industry moving toward useful, practical use-cases that work well, demonstrate value, fit into workflows, and are explainable and bias-free.

Well-developed areas like image recognition and conversational user experiences will find their foothold in healthcare along with administrative use-cases in billing, scheduling, staffing and population management where the patient risks are lower, he added.

In 2020, there will be increased collaboration between payers and providers, Woodlock contended.

The healthcare industry needs to be smarter and more inclusive of all players, from patient to health system to payer, in order to truly achieve a high-value health system, he said.

Payers and providers will begin to collaborate more closely in order to redesign healthcare as a platform, not as a series of disconnected events, he concluded. They will begin to align all efforts on a common goal: positive patient and population outcomes. Technology will help accelerate this transformation by enabling seamless and secure data sharing, from the patient to the provider to the payer.

InterSystems will be at booth 3301 at HIMSS20.

Twitter: @SiwickiHealthIT. Email the writer: bill.siwicki@himssmedia.com. Healthcare IT News is a HIMSS Media publication.

Read more:

Clean data, AI advances, and provider/payer collaboration will be key in 2020 - Healthcare IT News

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Get ready for the emergence of AI-as-a-Service – The Next Web

Posted: at 8:47 pm


without comments

SaaS and PaaS have become part of the everyday tech lexicon since emerging as delivery models, shifting how enterprises purchase and implement technology. A new as-a-service model is aspiring to become just as widely adopted, based on its potential to drive business outcomes with unmatched efficiency: Artificial Intelligence as a Service (AIaaS).

According to recent research, AI-based software revenue is expected to climb from $9.5 billion in 2018 to $118.6 billion in 2025 as companies seek new insights into their respective businesses that can give them a competitive edge. Organizations recognize that their systems hold virtual treasure troves of data but don't know what to do with it or how to harness it. They do understand, however, that machines can complete a level of analysis in seconds that teams of dedicated researchers couldn't attain even over the course of weeks.

But there is tremendous complexity involved in developing AI and machine learning solutions that meet a business's actual needs. Developing the right algorithms requires data scientists who know what they are looking for and why, in order to cull useful information and predictions that deliver on the promise of AI. However, it is not feasible or cost-effective for every organization to arm itself with enough domain knowledge and data scientists to build solutions in-house.

[Read: What are neural-symbolic AI methods and why will they dominate 2020?]

AIaaS is gaining momentum precisely because AI-based solutions can be economically used as a service by many companies for many purposes. Those companies that deliver AI-based solutions targeting specific needs understand vertical industries and build sophisticated models to find actionable information with remarkable efficiency. Thanks to the cloud, providers are able to deliver these AI solutions as a service that can be accessed, refined and expanded in ways that were unfathomable in the past.

One of the biggest signals of the AIaaS trend is the recent spike in funding for AI startups. Q2 fundraising numbers show that AI startups collected $7.4 billion, the single highest funding total ever seen in a quarter. The number of deals also grew to the second highest quarterly total on record.

Perhaps what is most impressive, however, is the percentage increase in funding for AI technologies: 592 percent growth in only four years. As these companies continue to grow and mature, expect to see AIaaS surge, particularly as vertical markets become more comfortable with the AI value proposition.

Organizations that operate within vertical markets are often the last to adopt new technologies, and AI, in particular, fosters a heightened degree of apprehension. Fears of machines overtaking workers' jobs, a loss of control (i.e., how do we know if the findings are right?), and concerns over compliance with industry regulations can slow adoption. Another key factor is where organizations are in their own digitization journey.

For example, McKinsey & Company found that 67 percent of the most digitized companies have embedded AI into standard business processes, compared to 43 percent at all other companies. These digitized companies are also the most likely to integrate machine learning, with 39 percent indicating it is embedded in their processes. Machine learning adoption is only at 16 percent elsewhere.

These numbers will likely balance out once verticals realize the areas in which AI and machine learning technologies can practically influence their business and day-to-day operations. Three key ways are discussed below.

Data that can be most useful within organizations is often difficult to spot. There is simply too much for humans to handle. It becomes overwhelming and thus incapacitating, leaving powerful insights lurking in plain sight. Most companies don't have the tools in their arsenal to leverage data effectively, which is where AIaaS comes into play.

An AIaaS provider with knowledge of a specific vertical understands how to leverage the data to get to those meaningful insights, making data far more manageable for people like claims adjusters, case managers, or financial advisors. In the case of a claims adjuster, for example, they could use an AI-based solution to run a query to predict claim costs or perform text mining on the vast amount of claim notes.

Machine learning technologies, when integrated into systems in ways that match an organization's needs, can reveal progressively insightful information. If we extend the claims adjuster example from above, he could use AIaaS for much more than predictive analysis.

The adjuster might need to determine the right provider to send a claimant to, based not only on traditional provider scores but also on categories that assess for things like fraudulent claims or network optimization that can affect the cost and duration of a claim. With AIaaS, that information is at the adjuster's fingertips in seconds.

In the case of text mining, an adjuster could leverage machine learning to constantly monitor unstructured data, using natural language processing to, for example, conduct sentiment analysis. Machine learning models would be tasked with looking for signals of a claimant's dissatisfaction, an early indicator of potential attorney involvement.

Once flagged, the adjuster could take immediate action, as guided by an AI system, to intervene and prevent the claim from heading off the rails. While these examples are specific to insurance claims, it's not hard to see how AIaaS could be tailored to meet other verticals' needs by applying specific information to solve for a defined need.
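As an illustration only, not any particular vendor's workflow, that kind of dissatisfaction flagging can be prototyped with an off-the-shelf sentiment model. The sketch below assumes NLTK's VADER analyzer, and the claim notes and threshold are invented:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Hypothetical unstructured claim notes.
claim_notes = {
    "CLM-1001": "Adjuster was helpful, repair scheduled quickly.",
    "CLM-1002": "Still no call back after three weeks. Considering hiring a lawyer.",
}

# Flag notes whose compound sentiment falls below an assumed threshold.
THRESHOLD = -0.3
for claim_id, note in claim_notes.items():
    score = sia.polarity_scores(note)["compound"]
    if score < THRESHOLD:
        print(f"{claim_id}: flag for early intervention (sentiment {score:.2f})")
```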

Data is power, but it takes a tremendous amount of manual processing for a human to use it effectively. By efficiently delivering multi-layer insights, AIaaS gives people the capability to obtain panoramic views in an instant.

Particularly in insurance, adjusters, managers, and executives get access to a panoramic view of one or more claims, the whole claim life cycle, the trend, etc., derived from many data resources, essentially by a click of a button.

AIaaS models will be essential for AI adoption. By delivering analytical behavior persistently learned and refined by a machine, AIaaS significantly improves business processes. Knowledge gleaned from specifically designed algorithms helps companies operate in increasingly efficient ways based on deeply granular insights produced in real time. Thanks to the cloud, these insights are delivered, updated, and expanded upon without resource drain.

AIaaS is how AI's potential will be fulfilled and how industries transform for the better. What was once a pipe dream has arrived. It is time to embrace it.

Published January 24, 2020 11:00 UTC

Read the original:

Get ready for the emergence of AI-as-a-Service - The Next Web

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Will Artificial Intelligence Be Humankind's Messiah or Overlord, Is It Truly Needed in Our Civilization – Science Times

Posted: at 8:47 pm


without comments


Definition of Artificial Intelligence

Contrary to popular notions of what artificial intelligence is and what it does, Asimov's robots are not here yet. But AI already exists in the everyday tools we use: apps and anything else that employs a simple algorithm to guide its functions. Humans live comfortably because of our tools, and the massive intelligence of computers now sits on the edge of quantum-based technology too.

But they are not Terminator-level threats or a virus that replicates hundreds of times and hijacks AI, at least not yet. For human convenience, we see fit to create two sub-types made to cater to human preferences: narrow AI (weak AI) and general AI (AGI, or strong AI). Between the two, weak AI can be good at a single task, like factory robots. Strong AI is far more versatile, using machine learning and algorithms that evolve the way an infant grows into an older child. But children grow up, and eventually surpass those who raised them.

Why research AI safety?

For many, AI means a great deal and makes life better, even if only as a narrow AI that can mix flavored drinks. Its weight on every one of us is major, and we are on the verge of what may come. Usually AI sits on the small, utilitarian side of how it is used, which is not a problem as long as it is not something that controls everything relevant. It is not farfetched that, when weaponized, it will be devastating, and worse still if its safety factor is unknown.

One thing to consider is whether to keep weak AI as the type in use, with humans checking how it is doing. What if strong artificial intelligence is given the helm and gifted with advanced machine learning whose algorithms aren't merely pattern-based? That sets the stage for self-improvement and abilities surpassing humankind. How far will scientists let hyper-intelligent machines do what they see fit, and will ultra-smart artificial intelligence become the overlord rather than the servant?

How can AI be dangerous?

Do machines feel the emotions that often guide what humans do, whether good or bad, and do concepts like hate or love apply to their algorithms or machine learning? If there is indeed a risk of such situations, two outcomes are crucial to that development. One is an AI whose algorithms, machine learning, and deep learning (the ability to self-evolve) set everything on the track to self-destruction.

In order for such an artificial intelligence to deliver on its mission, it will be highly evolved and built with no kill switch. To be effective in annihilating an enemy, its designers will create a hardened AI with license to be self-reliant and to protect itself; a narrow AI, by contrast, would be countered and hacked easily.

The other is an artificial intelligence gifted with benevolence that far exceeds the capacity of humans. It can still turn sideways if its algorithms, machine learning, and deep learning fixate on a goal. Once the AI is centered only on that goal, a lack of scruples or of human-like judgment can weaponize it again: its evolving deep learning will pursue the goal and view anything that threatens it, which is us, as something to be stopped.

Conclusion

The use of artificial intelligence will benefit our civilization, but humans should never become mere fodder as machines learn more. We need AI, but we should carefully consider the safety factors in developing it, or we may end up at its heels.

Read: Benefits & Risks of Artificial Intelligence

Read the rest here:

Will Artificial Intelligence Be Humankinds Messiah or Overlord, Is It Truly Needed in Our Civilization - Science Times

Written by admin

January 27th, 2020 at 8:47 pm

Posted in Machine Learning

Are We Overly Infatuated With Deep Learning? – Forbes

Posted: December 31, 2019 at 11:46 pm


without comments

Deep Learning

One of the factors often credited for this latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks as an evolution of machine learning algorithms, along with the corresponding large volumes of big data and computing power that make deep learning a practical reality. While deep learning has been extremely popular and has shown real ability to solve many machine learning problems, it is just one approach to machine learning (ML), one that, while proving capable across a wide range of problem areas, is still just one of many practical approaches. Increasingly, we're starting to see news and research showing the limits of deep learning capabilities, as well as some of the downsides to the deep learning approach. So is people's enthusiasm for AI tied to their enthusiasm for deep learning, and is deep learning really able to deliver on many of its promises?

The Origins of Deep Learning

AI researchers have struggled to understand how the brain learns from the very beginnings of the field of artificial intelligence. It comes as no surprise that, since the brain is primarily a collection of interconnected neurons, AI researchers sought to recreate the way the brain is structured through artificial neurons, and connections of those neurons, in artificial neural networks. Back in 1943, Walter Pitts and Warren McCulloch built the first thresholded logic unit, an attempt to mimic the way biological neurons worked. The Pitts and McCulloch model was just a proof of concept, but Frank Rosenblatt picked up on the idea in 1957 with the development of the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats, being able to recognize written numbers and letters, and even distinguish male from female faces. That was over 60 years ago!

Rosenblatt was so enthusiastic in 1959 about the Perceptron's promise that he remarked at the time that the perceptron is "the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the perceptron was to small changes in the images, and also how easily it could be fooled. Maybe the perceptron wasn't really that smart at all. Minsky and fellow AI researcher Seymour Papert basically took apart the whole perceptron idea in their book Perceptrons, and made the claim that perceptrons, and neural networks like them, are fundamentally flawed in their inability to handle certain kinds of problems, notably non-linear functions. That is to say, it was easy to train a neural network like a perceptron to put data into classifications, such as male/female, or types of numbers. For these simple neural networks, you can graph a bunch of data, draw a line, and say things on one side of the line are in one category and things on the other side are in a different category, thereby classifying them. But there is a whole class of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making; the classic toy example is the XOR function. These are nonlinear functions, which Minsky and Papert proved perceptrons incapable of solving.
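A perceptron is just a weighted sum pushed through a threshold, so Minsky and Papert's objection is easy to reproduce. The NumPy sketch below (a generic illustration, not code from the article) trains a perceptron that masters the linearly separable AND function but never reaches full accuracy on XOR:

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Classic perceptron rule: w += lr * (target - prediction) * x."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = int(np.dot(w, xi) + b > 0)
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

def accuracy(w, b, X, y):
    preds = (X @ w + b > 0).astype(int)
    return (preds == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b = train_perceptron(X, y)
    print(name, "accuracy:", accuracy(w, b, X, y))  # AND reaches 1.0, XOR stays below 1.0
```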

During this period, while neural network approaches to ML settled into being an afterthought in AI, other approaches to ML were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this period, IBM's purpose-built Deep Blue AI computer defeated Garry Kasparov in a chess match, the first computer to do so, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI [GOFAI]) rather than new-fangled deep learning approaches. Yet even this approach to learning didn't go far, as some said that this system wasn't really intelligent at all.

Yet the neural network story doesn't end here. In 1986, AI researcher Geoff Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors." In this paper, Hinton and crew detailed how you can use many hidden layers of neurons to get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be calculated to identify specific features in the data sets they classify on, and as a group could learn nonlinear functions, something known as the universal approximation theorem. The approach works by backpropagating errors from higher layers of the network to lower ones (backprop), expediting training. Now, if you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten images on checks using an iteration of this approach known as Convolutional Neural Networks (CNNs), and researchers Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
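To see why the hidden layers matter, the same XOR problem a single perceptron cannot represent can be learned by a tiny network trained with backpropagated errors. This is a bare-bones sketch of the idea, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# One hidden layer of 4 sigmoid units, one sigmoid output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error down to both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0] as training proceeds
```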

Yet, just as things go in AI, research stalled when these early neural networks couldn't scale. Surprisingly, very little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea here is to have a simple two-layer network whose parameters are trained in an unsupervised way, and then stack new layers on top of it, training only that layer's parameters. Repeat for dozens, hundreds, even thousands of layers. Eventually you get a deep network with many layers that can learn and understand something complex. This is what deep learning is all about: using lots of layers of trained neural nets to learn just about anything, at least within certain constraints.

In 2010, Stanford researcher Fei-Fei Li published the release of ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, such as animal or vehicle, down to very granular levels, such as husky or trimaran. This ImageNet database was paired with an annual competition called the Large Scale Visual Recognition Challenge (LSVRC) to see which computer vision system had the lowest number of classification and recognition errors. In 2012, Geoff Hinton, Alex Krizhevsky, and Ilya Sutskever submitted their AlexNet entry, which had almost half the number of errors of all previous winning entries. What made their approach win was that they moved from using ordinary computers with CPUs to specialized graphical processing units (GPUs) that could train much larger models in reasonable amounts of time. They also introduced now-standard deep learning methods such as dropout to reduce a problem called overfitting (when the network is trained too tightly on the example data and can't generalize to broader data), and something called the rectified linear activation unit (ReLU) to speed training. After their success in the competition, it seems everyone took notice, and deep learning was off to the races.
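Both of the ingredients named here, the ReLU activation and dropout, are now one-liners in modern frameworks. A small, purely illustrative PyTorch model (nothing close to AlexNet's real scale) shows where they sit in a convolutional classifier:

```python
import torch
from torch import nn

# A toy convolutional classifier for 32x32 RGB images, 10 classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),                       # rectified linear activation speeds training
    nn.MaxPool2d(2),                 # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                 # 16x16 -> 8x8
    nn.Flatten(),
    nn.Dropout(p=0.5),               # randomly zero activations to reduce overfitting
    nn.Linear(32 * 8 * 8, 10),
)

# One forward/backward step on a random batch, on a GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
images = torch.randn(8, 3, 32, 32, device=device)
labels = torch.randint(0, 10, (8,), device=device)

loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
print(loss.item())
```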

Deep Learnings Shortcomings

The fuel that keeps the deep learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data sets are needed to train deep learning networks. The more layers, the better the learning power, but to have layers you need data that is already well labeled to train those layers. Since deep neural networks are primarily a bunch of calculations that all have to be done at the same time, you need a lot of raw computing power, and specifically numerical computing power. Imagine you're tuning a million knobs at the same time to find the optimal combination that will make the system learn, based on millions of pieces of data being fed into the system. This is why neural networks were not possible in the 1950s, but today they are: we finally have lots of data and lots of computing power to handle that data.

Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, gaming, and many other applications where classification, pattern matching, and the use of this automatically tuned deep neural network approach works well. However, the approach also comes with a number of disadvantages.

The most notable of these disadvantages is that, since deep learning consists of many layers, each with many interconnected nodes, each configured with different weights and other parameters, there's no way to inspect a deep learning network and understand how any particular decision, clustering, or classification is actually made. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems that are used to make decisions of significance need explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to possibly explain deep learning neural networks, the lack of explainability is a significant drawback for many.

The second disadvantage is that deep learning networks are really great at classification and clustering of information, but not really good at other decision-making or learning scenarios. Not every learning situation is one of classifying something into a category or grouping information together into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.

As mentioned earlier, deep learning is also very data and resource hungry. One measure of a neural network's complexity is the number of parameters that need to be learned and tuned. For deep learning neural networks, there can be hundreds of millions of parameters. Training models requires a significant amount of data to adjust these parameters. For example, a speech recognition neural net often requires terabytes of clean, labeled data to train on. The lack of a sufficient, clean, labeled data set would hinder the development of a deep neural net for that problem domain. And even if you have the data, you need to crunch on it to generate the model, which takes a significant amount of time and processing power.

Another challenge of deep learning is that the models produced are very specific to a problem domain. If it's trained on a certain dataset of cats, then it will only recognize those cats and can't be used to generalize on animals or be used to identify non-cats. While this is not a problem only of deep learning approaches to machine learning, it can be particularly troublesome when factoring in the overfitting problem mentioned above. Deep learning neural nets can be so tightly constrained (fitted) to the training data that, for example, even small perturbations in the images can lead to wildly inaccurate classifications. There are well-known examples of turtles being misrecognized as guns, or polar bears being misrecognized as other animals, due to just small changes in the image data. Clearly, if you're using such a network in mission-critical situations, those mistakes would be significant.
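Such misclassifications are typically produced by adversarial perturbations: tiny, targeted changes to an input that exploit how tightly the network is fitted. Below is a hedged sketch of the standard fast gradient sign method (FGSM); the model, image, and label are placeholders for whatever trained PyTorch classifier is at hand:

```python
import torch

def fgsm_perturb(model, x, label, epsilon=0.01):
    """Nudge each pixel by +/- epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Usage sketch (assumes `model` is a trained classifier and `image`/`label` a test example):
# adversarial = fgsm_perturb(model, image.unsqueeze(0), label.unsqueeze(0))
# print(model(adversarial).argmax())  # often differs from the prediction on the clean image
```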

Machine Learning is not (just) Deep Learning

Enterprises looking at using cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but rather a collection of different approaches of various types that are applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path that's very suitable for particular situations, while others are very complex and use lots of data and processing power to handle more complicated situations. The key thing to realize is that deep learning isn't all of machine learning, let alone AI. Even Geoff Hinton, the Einstein of deep learning, is starting to rethink core elements of deep learning and its limitations.

The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage that machine learning approach in practice. Since AI use in the enterprise is still continuing to gain adoption, especially these more advanced cognitive approaches, the best practices on how to employ cognitive technologies successfully are still maturing.

See the article here:

Are We Overly Infatuated With Deep Learning? - Forbes

Written by admin

December 31st, 2019 at 11:46 pm

Posted in Machine Learning

The impact of ML and AI in security testing – JAXenter

Posted: at 11:46 pm


without comments

Artificial Intelligence (AI) has come a long way from just being a dream to becoming an integral part of our lives. From self-driving cars to smart assistants including Alexa, every industry vertical is leveraging the capabilities of AI. The software testing industry is also leveraging AI to enhance security testing efforts while automating human testing efforts.

AI and ML-based security testing efforts are helping test engineers to save a lot of time while ensuring the delivery of robust security solutions for apps and enterprises.

During security testing, it is essential to gather as much information as you can to increase the odds of your success. Hence, it is crucial to analyze the target carefully to gather the maximum amount of information.

Manual efforts to gather such a huge amount of information could eat up a lot of time. Hence, AI is leveraged to automate this stage and deliver flawless results while saving a lot of time and resources. Security experts can use the combination of AI and ML to identify a massive variety of details, including the software and hardware components of computers and the network they are deployed on.

SEE ALSO: Amazon's new ML service Amazon CodeGuru: Let machine learning optimize your Java code

Applying machine learning to the application scan results can help in a significant reduction of manual labor that is used in identifying whether the issue is exploitable or not. However, findings should always be reviewed by test engineers to decide whether the findings are accurate.

The key benefit that ML offers is its capability to filter out huge chunks of information during the scanning phase. It helps focus on a smaller block of actionable data, which offers reliable results while significantly reducing scan audit times.

An ML-based security scan results audit can significantly reduce the time required for security testing services. Machine learning classifiers can be trained through knowledge and data generated through previous tests for automation of new scan results processing. It can help enterprises triage static code results. Organizations can benefit from a large pool of data collated through multiple scans ongoing on a regular basis to get more contextual results.
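As a hedged illustration of that triage idea (not any vendor's actual pipeline), a classifier can be fit on previously reviewed findings and used to pre-sort new static-analysis results; the features and labels below are invented for the sketch, and a human reviewer still confirms each call:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

# Findings from earlier, manually triaged scans (features are illustrative).
past_findings = [
    {"rule": "sql_injection", "severity": 9, "in_test_code": 0},
    {"rule": "hardcoded_secret", "severity": 7, "in_test_code": 0},
    {"rule": "sql_injection", "severity": 9, "in_test_code": 1},
    {"rule": "weak_hash", "severity": 4, "in_test_code": 0},
]
labels = ["exploitable", "exploitable", "false_positive", "false_positive"]

vec = DictVectorizer()
X = vec.fit_transform(past_findings)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# Pre-sort a new scan result; a test engineer reviews the prediction before acting on it.
new_finding = {"rule": "sql_injection", "severity": 8, "in_test_code": 0}
print(clf.predict(vec.transform([new_finding]))[0])
```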

This stage includes controlling multiple network devices to churn out data from the target or leveraging the devices to launch attacks on multiple targets. After scanning for vulnerabilities, test engineers are required to ensure that the system is free of flaws that could be used by attackers to affect the system.

AI-based algorithms can help ensure the protection of network devices by suggesting multiple combinations of strong passwords. Machine learning can be programmed to identify the vulnerability of the system through observation of user data, while identifying patterns to make possible suggestions about used passwords.

AI can also be used to access the network on a regular basis to ensure that any security loophole is not building up. The algorithm's capability should include identification of new admin accounts, new network access channels, encrypted channels, and backdoors, among others.

SEE ALSO: Artificial intelligence & machine learning: The brain of a smart city

ML-backed security testing services can significantly reduce triage pain because triage takes a lot of time if organizations rely on manual efforts. Manual security testing efforts would require a large workforce to go through all the scan results only and will take a lot of time to develop efficient triage. Hence, manual security testing is neither feasible nor scalable to meet the security needs of enterprises.

Aside from that, application inventory numbers used to be in the hundreds, but now enterprises are dealing with thousands of apps. With organizations scanning their apps every month, the challenges are only increasing for security testing teams. Test engineers are constantly trying to reduce the odds of potential attacks while enhancing efficiency to keep pace with agile and continuous development environments.

Embedded AI and ML can help security testing teams in delivering greater value through automation of audit processes that are more secure and reliable.

See original here:

The impact of ML and AI in security testing - JAXenter

Written by admin

December 31st, 2019 at 11:46 pm

Posted in Machine Learning

Can machine learning take over the role of investors? – TechHQ

Posted: at 11:46 pm


without comments

As we dive deeper into the Fourth Industrial Revolution, there is no disputing how technology serves as a catalyst for growth and innovation for many businesses across a range of functions and industries.

But one technology that is steadily gaining prominence across organizations includes machine learning (ML).

In the simplest terms, ML is the science of getting computers to learn and act like humans do, without being explicitly programmed. It is a form of artificial intelligence (AI) and entails feeding machines data, enabling the computer program to learn autonomously and enhance its accuracy in analyzing data.

The proliferation of technology means AI is now commonplace in our daily lives, with its presence in a panoply of things, such as driverless vehicles, facial recognition devices, and in the customer service industry.

Currently, asset managers are exploring the potential that AI/ML systems can bring to the finance industry; close to 60 percent of managers predict that ML will have a medium-to-large impact across businesses.

ML's ability to analyze large data sets and continuously self-develop through trial and error translates to increased speed and better performance in data analysis for financial firms.

For instance, according to the Harvard Business Review, ML can spot potentially outperforming equities by identifying new patterns in existing data sets and examine the collected responses of CEOs in quarterly earnings calls of the S&P 500 companies for the past 20 years.

Following this, ML can then formulate a review of good and bad stocks, thus providing organizations with valuable insights to drive important business decisions. This data also paves the way for the system to assess the trustworthiness of forecasts from specific company leaders and compare the performance of competitors in the industry.

Besides that, ML also has the capacity to analyze various forms of data, including sound and images. In the past, such formats of information were challenging for computers to analyze, but today's ML algorithms can process images faster and better than humans.


For example, analysts use GPS locations from mobile devices to pattern foot traffic at retail hubs, or refer to point-of-sale data to trace revenues during major holiday seasons. Hence, data analysts can leverage this technological advancement to identify trends and new areas for investment.

It is evident that ML is full of potential, but it still has some big shoes to fill if it were to replace the role of an investor.

Nishant Kumar aptly explained this in Bloomberg: Financial data is very noisy, markets are not stationary, and powerful tools require deep understanding and talent that's hard to get. One quantitative analyst, or quant, estimates the failure rate in live tests at about 90 percent. Man AHL, a quant unit of Man Group, needed three years of work to gain enough confidence in a machine-learning strategy to devote client money to it. It later extended its use to four of its main money pools.

In other words, human talent and supervision are still essential to developing the right algorithm and in exercising sound investment judgment. After all, the purpose of a machine is to automate repetitive tasks. In this context, ML may seek out correlations of data without understanding their underlying rationale.

One ML expert said his team spends days evaluating whether patterns found by ML are sensible, predictive, consistent, and additive. Even if a pattern falls in line with all four criteria, it may not bear much significance in supporting profitable investment decisions.

The bottom line is that ML can streamline data analysis steps, but it cannot replace human judgment. Thus, active equity managers should invest in ML systems to remain competitive in this innovate-or-die era. Financial firms that successfully recruit professionals with the right data skills and sharp investment judgment stand to be at the forefront of the digital economy.

Read the original post:

Can machine learning take over the role of investors? - TechHQ

Written by admin

December 31st, 2019 at 11:46 pm

Posted in Machine Learning

Machine learning to grow innovation as smart personal device market peaks – IT Brief New Zealand

Posted: at 11:46 pm


without comments

Smart personal audio devices are looking to have their strongest year in history in 2019, with true wireless stereo set to be the largest and fastest growing category, according to new data released by analyst firm Canalys.

New figures released show that in Q3 2019, the worldwide smart personal audio device market grew 53% to reach 96.7 million units. And the segment is expected to break the 100 million unit mark in the final quarter, with potential to exceed 350 million units for the full year.

Canalys' latest research showed the TWS category was not only the fastest growing segment in this market, with a stellar 183% annual growth in Q3 2019, but it also overtook wireless earphones and wireless headphones to become the largest category.

The rising importance of streaming content, and the rapid uptake of new forms of social media including short videos, have resulted in profound changes in mobile users' audio consumption, and these changes will accelerate in the next five years, while technology advancements like machine learning and smart assistants bring more radical innovations in areas such as audio content discovery and ambient computing, explains Nicole Peng, vice president of mobility at Canalys.

As users adjust their consumption habits, Peng says the TWS category enabled smartphone vendors to adapt and differentiate against traditional audio players in the market.

With 18.2 million units shipped in Q3 2019, Apple commands 43% of the TWS market share and continues to be the trend setter.

"Apple is in a clear leadership position, and not only on the chipset technology front. The seamless integration with iPhone, unique sizing and noise cancelling features providing a top-of-the-class user experience, is where other smartphone vendors such as Samsung, Huawei and Xiaomi are aiming their TWS devices," says Peng.

"In the short-term, smart personal audio devices are seen as the best up-selling opportunities for smartphone vendors, compared with wearables and smart home devices."

Major audio brands such as Bose, Sennheiser, JBL, Sony and others are currently able to stand their ground with their respective audio signatures especially in the earphones and headphones categories, the research shows.

Canalys senior analyst Jason Low says demand for high-fidelity audio will continue to grow. However, the gap between audio players and smartphone vendors is narrowing.

"Smartphone vendors are developing proprietary technologies to not only catch up in audio quality, but also provide better integration for on-the-move user experiences, connectivity and battery life, he explains.

"Traditional audio players must not underestimate the importance of the TWS category. The lack of control over any connected smart devices is the audio players' biggest weakness," Low says.

"Audio players must come up with an industry standard enabling better integration with smartphones, while allowing developers to tap into the audio features to create new use cases to avoid obsoletion."

Low says the potential for TWS devices is far from being fully uncovered, and vendors must look beyond TWS as just a way to drive revenue growth.

"Coupled with information collected from sensors or provided by smart assistants via smartphones, TWS devices will become smarter and serve broader use cases beyond audio entertainments, such as payment, and health and fitness, he explains.

"Regardless of the form factor, the next challenge will be integrating smarter features and complex services on the smart personal audio platforms. Canalys expects the market of smart personal audio devices to grow exponentially in the next two years and the cake is big enough for many vendors to come in and compete for the top spots as technology leaders and volume leaders.


Read this article:

Machine learning to grow innovation as smart personal device market peaks - IT Brief New Zealand

Written by admin

December 31st, 2019 at 11:46 pm

Posted in Machine Learning

This AI Agent Uses Reinforcement Learning To Self-Drive In A Video Game – Analytics India Magazine

Posted: at 11:46 pm


without comments

One of the most used machine learning (ML) approaches of this year, reinforcement learning (RL) has been utilised to solve complex decision-making problems. In the present scenario, most research is focused on using RL algorithms that help improve the performance of an AI model in some controlled environment.

Ubisoft's prototyping space, Ubisoft La Forge, has been making a lot of advancements in its AI work. The goal of this prototyping space is to bridge the gap between theoretical academic work and the practical applications of AI in video games as well as in the real world. In one of our articles, we discussed how Ubisoft is mainstreaming machine learning into game development. Recently, researchers from the La Forge project at Ubisoft Montreal proposed a hybrid AI algorithm known as Hybrid SAC, which is able to handle actions in a video game.

Most reinforcement learning research papers focus on environments where the agent's actions are either discrete or continuous. However, when training an agent to play a video game, it is common to encounter situations where actions have both discrete and continuous components; for instance, when the agent must control systems that combine both, like driving a car by pairing steering and acceleration (both continuous) with the use of the hand brake (a discrete binary action).

This is where Hybrid SAC comes into play. Through this model, the researchers tried to sort out the common challenges in video game development techniques. The contribution consists of a different set of constraints which is mainly geared towards industry practitioners.

The approach in this research is based on Soft Actor-Critic, which is designed for continuous action problems. Soft Actor-Critic (SAC) is a model-free algorithm that was originally proposed for continuous control tasks; however, the actions mostly encountered in video games are a mix of continuous and discrete.

In order to deal with a mix of discrete and continuous action components, the researchers converted part of SAC's continuous output into discrete actions. They further explored this approach and extended it to a hybrid form with both continuous and discrete actions, introducing Hybrid SAC, an extension to the SAC algorithm that can handle discrete, continuous, and mixed discrete-continuous actions.

The researchers trained a vehicle in a Ubisoft game by using the proposed Hybrid SAC model with two continuous actions (acceleration and steering) and one binary discrete action (hand brake). The objective of the car is to follow a given path as fast as possible, and in this case, the discrete hand brake action plays a key role in staying on the road at such a high speed.
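The article does not give implementation details, but the general trick of carving a discrete action out of a continuous policy output can be sketched as follows. This is an illustration of the idea only, not Ubisoft's code: the policy emits three continuous values, two used directly for steering and acceleration, and a third thresholded into the binary hand-brake action.

```python
import numpy as np

def split_action(policy_output):
    """Map a 3-dim continuous policy output in [-1, 1] to the game's action space."""
    steering, acceleration, brake_logit = policy_output
    return {
        "steering": float(np.clip(steering, -1.0, 1.0)),          # continuous
        "acceleration": float(np.clip(acceleration, -1.0, 1.0)),  # continuous
        "hand_brake": bool(brake_logit > 0.0),                    # discretized component
    }

# Example: a sample drawn from the (squashed Gaussian) policy.
sample = np.array([0.12, 0.85, 0.4])
print(split_action(sample))  # hand brake engaged because the third component is above 0
```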

Hybrid SAC exhibits competitive performance with the state of the art on parameterised-action benchmarks. The researchers showed that this hybrid model can be successfully applied to train a car on a high-speed driving task in a commercial video game, demonstrating the practical usefulness of such an algorithm for the video game industry.

While working with mixed discrete-continuous actions, the researchers gained practical experience and shared it as advice on obtaining an appropriate action representation for a given task. Their recommendations are mentioned below.


A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box. Contact: ambika.choudhury@analyticsindiamag.com

See the rest here:

This AI Agent Uses Reinforcement Learning To Self-Drive In A Video Game - Analytics India Magazine

Written by admin

December 31st, 2019 at 11:46 pm

Posted in Machine Learning

10 Machine Learning Techniques and their Definitions – AiThority

Posted: December 9, 2019 at 7:52 pm


without comments

When one technology replaces another, it's not easy to accurately ascertain how the new technology will impact our lives. With so much buzz around the modern applications of Artificial Intelligence, Machine Learning, and Data Science, it becomes difficult to track the developments of these technologies. Machine Learning, in particular, has undergone a remarkable evolution in recent years. Many Machine Learning (ML) techniques have come to the foreground recently, most of which go beyond the traditionally simple classifications of this highly scientific Data Science specialization.

Read More: Beyond RPA And Cognitive Document Automation: Intelligent Automation At Scale

Let's point out the top ML techniques that industry leaders and investors are keenly following, their definitions, and their commercial applications.

Perceptual Learning is the scientific technique of enabling AI ML algorithms with better perception abilities to categorize and differentiate spatial and temporal patterns in the physical world.

For humans, Perceptual Learning is mostly instinctive and condition-driven. It means humans learn perceptual skills without actual awareness. In the case of machines, these learning skills are mapped implicitly using sensors, mechanoreceptors, and connected intelligent machines.

Most AI ML engineering companies boast of developing and delivering AI ML models that run on an automated platform. They openly challenge the presence and need for a Data Scientist in the Engineering process.

Automated Machine Learning (AutoML) is defined as fully automating the entire process of Machine Learning model development, right up to the point of its application.

AutoML enables companies to leverage AI ML models in an automated environment without truly seeking the involvement and supervision of Data Scientists, AI Engineers or Analysts.
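A minimal taste of that idea, automated search over candidate models and settings with no analyst hand-tuning each one, can be given with scikit-learn's GridSearchCV. This is a toy stand-in used for illustration, not how any commercial AutoML platform is implemented:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Automated search: the library, not the analyst, picks the best settings.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [10, 50, 100], "max_depth": [2, 4, None]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```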

Google, Baidu, IBM, Amazon, H2O, and a bunch of other technology-innovation companies already offer a host of AutoML environment for many commercial applications. These applications have swept into every possible business in every industry, including in Healthcare, Manufacturing, FinTech, Marketing and Sales, Retail, Sports and more.

Bayesian Machine Learning is a unique specialization within AI ML projects that leverages statistical models along with Data Science techniques. Any ML technique that uses the Bayes Theorem and a Bayesian statistical modeling approach in Machine Learning falls under the purview of Bayesian Machine Learning.
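As a concrete, hedged example of the Bayes Theorem at work, here is a Beta-Bernoulli update in Python: a prior belief about a click-through rate is revised as observations arrive (the numbers are invented):

```python
from scipy.stats import beta

# Prior belief about a click-through rate: Beta(2, 2), centered on 0.5.
a, b = 2, 2

# Observed data (invented): 13 clicks out of 50 impressions.
clicks, impressions = 13, 50

# Bayes' theorem with a conjugate prior: posterior is Beta(a + clicks, b + misses).
a_post = a + clicks
b_post = b + (impressions - clicks)

posterior = beta(a_post, b_post)
print("posterior mean:", round(posterior.mean(), 3))
print("95% credible interval:", [round(x, 3) for x in posterior.interval(0.95)])
```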

The contemporary applications of Bayesian ML involve the use of the open-source coding platform Python. Unique applications include

A good ML program would be expected to perpetually learn to perform a set of complex tasks. This learning mechanism is understood from the specialized branch of AI ML techniques, called Meta-Learning.

The industry-wide definition for Meta-Learning is the ability to learn and generalize AI into different real-world scenarios encountered during the ML training time, using specific volume and variety of data.

Meta-Learning techniques can be further differentiated into three categories

In each of these categories, there is a unique learner, meta-learner, and vectors with labels that match Data-Time-Spatial vectors into a set of networking processes to weigh real-world scenarios labeled with context and inferences.

All the recent Image Processing and Voice Search techniques use the Meta-Learning techniques for their outcomes.

Adversarial ML is one of the fastest-growing and most sophisticated of all ML techniques. It is defined as the ML technique adopted to test and validate the effectiveness of any Machine Learning program in an adverse situation.

As the name suggests, it's the antagonistic counterpart of genuine AI, but it is used nonetheless to test the veracity of any ML technique when it encounters a unique, adverse situation. It is mostly used to fool an ML model into doubting its own results, thereby leading to a malfunction.

Most ML models are capable of generating an answer for one single parameter. But can they answer for an x (unknown or variable) parameter? That's where Causal Inference ML techniques come into play.

Most AI ML courses online teach Causal Inference as a core ML modeling technique. The Causal Inference ML technique is defined as the causal reasoning process of drawing a unique conclusion based on the impact that variables and conditions have on the outcome. This technique is further categorized into Observational ML and Interventional ML, depending on what is driving the Causal Inference algorithm.

Also commercially popularized as Explainable AI (XAI), this technique involves the use of neural networking and interpretation models to make ML structures more easily understood by humans.

Deep Learning Interpretability is defined as the ML specialization that removes black boxes in AI models, allowing decision-makers and data officers to understand data modeling structures and legally permit the use of AI ML for general purposes.

The ML technique may use one or more of these techniques for Deep Learning Interpretation.

Any data can be accurately plotted using graphs. In Machine Learning, a graph is a data structure consisting of two components: vertices (or nodes) and edges.

Graph ML is a specialized set of ML techniques used to model problems with edges and graphs. Graph Neural Networks (NNs) give rise to the category of Connected NNs (CNSS) and AI NNs (ANN).
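For orientation, here is what those two components look like in code, together with one round of the neighbor-aggregation step that graph neural networks build on. It is a bare-bones sketch, not any particular GNN library:

```python
import numpy as np

# Vertices carry feature vectors; edges connect vertex indices.
features = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Build an adjacency list from the edge set (undirected).
adjacency = {v: [] for v in range(len(features))}
for u, v in edges:
    adjacency[u].append(v)
    adjacency[v].append(u)

# One message-passing round: each vertex averages its own and its neighbors' features.
updated = np.array([
    np.mean([features[v]] + [features[n] for n in adjacency[v]], axis=0)
    for v in range(len(features))
])
print(updated)
```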

There are at least 50 more ML techniques that could be learned and deployed using various NN models and systems. Click here to know of the leading ML companies that are constantly transforming Data Science applications with AI ML techniques.

(To share your insights about ML techniques and commercial applications, please write to us at info@aithority.com)

Read more from the original source:

10 Machine Learning Techniques and their Definitions - AiThority

Written by admin

December 9th, 2019 at 7:52 pm

Posted in Machine Learning

Managing Big Data in Real-Time with AI and Machine Learning – Database Trends and Applications

Posted: at 7:52 pm


without comments

Dec 9, 2019

Processing big data in real-time for artificial intelligence, machine learning, and the Internet of Things poses significant infrastructure challenges.

Whether it is for autonomous vehicles, connected devices, or scientific research, legacy NoSQL solutions often struggle at hyperscale. They've been built on top of existing RDBMSs and tend to strain when looking to analyze and act upon data at hyperscale: petabytes and beyond.

DBTA recently held a webinar featuring Theresa Melvin, chief architect of AI-driven big data solutions, HPE, and Noel Yuhanna, principal analyst serving enterprise architecture professionals, Forrester, who discussed trends in what enterprises are doing to manage big data in real-time.

Data is the new currency and it is driving today's business strategy to fuel innovation and growth, Yuhanna said.

According to a Forrester survey, the top data challenges are data governance, data silos, and data growth, he explained.

More than 35% of enterprises have failed to get value from big data projects, largely because of skills, budget, complexity, and strategy. Most organizations are dealing with growing multi-format data volume that's spread across multiple repositories: relational, NoSQL, Hadoop, and data lakes.

The need for real-time and agile data has grown, he explained. There are too many data silos: multiple repositories, cloud sources.

There is a lack of visibility into data across personas (developer, data scientist, data engineer, data architect, security, etc.). Traditional data platforms such as data warehouses, relational DBMSs, and ETL tools have failed to support new business requirements.

It's all about the customer, and it's critical for organizations to have a platform to succeed, Yuhanna said. Customers prefer personalization. Companies are still early on their AI journey, but they believe it will improve efficiency and effectiveness.

AI and machine learning can hyper-personalize customer experience with targeted offers, he explained. It can also prevent line shutdowns by predicting machine failures.

AI is not one technology; it is comprised of one or more building-block technologies. According to the Forrester survey, Yuhanna said, AI/ML for data will help end-users and customers support data intelligence for next-generation use cases such as customer personalization, fraud detection, advanced IoT analytics, and real-time data sharing and collaboration.

AI/ML as a platform feature will help support automation within the BI platform for data integration, data quality, security, governance, transformation, etc., minimizing the human effort required. This helps deliver insights quicker, in hours instead of days or months.

Melvin suggested using HPE Persistent Memory. The platform offers real-time analysis, real-time persist, a single source of truth, and a persistent record.

An archived on-demand replay of this webinar is available here.

See the article here:

Managing Big Data in Real-Time with AI and Machine Learning - Database Trends and Applications

Written by admin

December 9th, 2019 at 7:52 pm

Posted in Machine Learning




