What we learned about AI and deep learning in 2022 – VentureBeat
Posted: December 29, 2022 at 12:20 am
It's as good a time as any to discuss the implications of advances in artificial intelligence (AI). 2022 saw interesting progress in deep learning, especially in generative models. However, as the capabilities of deep learning models increase, so does the confusion surrounding them.
On the one hand, advanced models such as ChatGPT and DALL-E are displaying fascinating results and giving the impression of thinking and reasoning. On the other hand, they often make errors that prove they lack some of the basic elements of intelligence that humans have.
The science community is divided on what to make of these advances. At one end of the spectrum, some scientists have gone as far as saying that sophisticated models are sentient and should be attributed personhood. Others have suggested that current deep learning approaches will lead to artificial general intelligence (AGI). Meanwhile, some scientists have studied the failures of current models and are pointing out that although useful, even the most advanced deep learning systems suffer from the same kind of failures that earlier models had.
It was against this background that the online AGI Debate #3 was held on Friday, hosted by Montreal AI president Vincent Boucher and AI researcher Gary Marcus. The conference, which featured talks by scientists from different backgrounds, discussed lessons from cognitive science and neuroscience, the path to commonsense reasoning in AI, and suggestions for architectures that can help take the next step in AI.
Deep learning approaches can provide useful tools in many domains, said linguist and cognitive scientist Noam Chomsky. Some of these applications, such as automatic transcription and text autocomplete, have become tools we rely on every day.
"But beyond utility, what do we learn from these approaches about cognition, thinking, in particular language?" Chomsky said. "[Deep learning] systems make no distinction between possible and impossible languages. The more the systems are improved, the deeper the failure becomes. They will do even better with impossible languages and other systems."
This flaw is evident in systems like ChatGPT, which can produce text that is grammatically correct and consistent but logically and factually flawed. Presenters at the conference provided numerous examples of such flaws, such as large language models not being able to sort sentences based on length, making grave errors on simple logical problems, and making false and inconsistent statements.
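To make the sentence-length example concrete, the task that trips up large language models is one a trivial deterministic program solves exactly, which is why such failures are treated as evidence of shallow reasoning rather than missing knowledge. The snippet below is only an illustration with made-up sentences, not one of the test cases presented at the debate.

```python
# Sorting sentences by length is deterministic and trivial in ordinary code,
# yet large language models were reported to get it wrong.
sentences = [
    "Deep learning models can write fluent prose.",
    "They still fail simple tests.",
    "Counting and sorting remain surprisingly brittle for them.",
]

# Sort by word count, breaking ties by character count for stability.
ordered = sorted(sentences, key=lambda s: (len(s.split()), len(s)))

for s in ordered:
    print(len(s.split()), "words:", s)
```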
According to Chomsky, the current approaches for advancing deep learning systems, which rely on adding training data, creating larger models, and using clever programming, will only exacerbate the mistakes that these systems make.
"In short, they're telling us nothing about language and thought, about cognition generally, or about what it is to be human or any other flights of fantasy in contemporary discussion," Chomsky said.
Marcus said that a decade after the 2012 deep learning revolution, considerable progress has been made, but some issues remain.
He laid out four key aspects of cognition that he argued are missing from deep learning systems.
Deep neural networks will continue to make mistakes in adversarial and edge cases, said Yejin Choi, computer science professor at the University of Washington.
"The real problem we're facing today is that we simply do not know the depth or breadth of these adversarial or edge cases," Choi said. "My hunch is that this is going to be a real challenge that a lot of people might be underestimating. The true difference between human intelligence and current AI is still so vast."
Choi said that the gap between human and artificial intelligence is caused by a lack of common sense, which she described as the "dark matter of language and intelligence" and the unspoken rules of how the world works that influence the way people use and interpret language.
According to Choi, common sense is trivial for humans and hard for machines because obvious things are never spoken, there are endless exceptions to every rule, and there is no universal truth in commonsense matters. "It's ambiguous, messy stuff," she said.
AI researcher and neuroscientist, Dileep George, emphasized the importance of mental simulation for common sense reasoning via language. Knowledge for commonsense reasoning is acquired through sensory experience, George said, and this knowledge is stored in the perceptual and motor system. We use language to probe this model and trigger simulations in the mind.
"You can think of our perceptual and conceptual system as the simulator, which is acquired through our sensorimotor experience. Language is something that controls the simulation," he said.
George also questioned some of the current ideas for creating world models for AI systems. In most of these blueprints for world models, perception is a preprocessor that creates a representation on which the world model is built.
"That is unlikely to work because many details of perception need to be accessed on the fly for you to be able to run the simulation," he said. "Perception has to be bidirectional and has to use feedback connections to access the simulations."
While many scientists agree on the shortcomings of current AI systems, they differ on the road forward.
David Ferrucci, founder of Elemental Cognition and a former member of the IBM Watson team, said that we can't fulfill our vision for AI if we can't get machines to explain why they are producing the output they're producing.
Ferrucci's company is working on an AI system that integrates different modules. Machine learning models generate hypotheses based on their observations and project them onto an explicit knowledge module that ranks them. The best hypotheses are then processed by an automated reasoning module. This architecture can explain its inferences and its causal model, two features that are missing in current AI systems. The system develops its knowledge and causal models from classic deep learning approaches and interactions with humans.
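Elemental Cognition's implementation is not public, but the flow described above, where learned models propose hypotheses, an explicit knowledge module ranks them, and an automated reasoner explains the winner, can be sketched roughly as follows. Every class, function, and knowledge entry here is a hypothetical placeholder, not the company's API.

```python
# Hypothetical sketch of a neuro-symbolic pipeline of the kind described:
# ML proposes, explicit knowledge ranks, a symbolic reasoner explains.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    claim: str
    score: float  # confidence assigned by the learned model

def propose_hypotheses(observation: str) -> list[Hypothesis]:
    """Stand-in for a machine learning model generating candidate explanations."""
    return [Hypothesis("pump failure", 0.7), Hypothesis("sensor drift", 0.4)]

def rank_against_knowledge(hyps: list[Hypothesis], knowledge: dict) -> list[Hypothesis]:
    """Re-weight candidates using an explicit knowledge base, then sort."""
    for h in hyps:
        h.score *= knowledge.get(h.claim, {}).get("prior", 0.5)
    return sorted(hyps, key=lambda h: h.score, reverse=True)

def reason(best: Hypothesis, knowledge: dict) -> str:
    """Stand-in for an automated reasoning module that produces an explanation."""
    causes = knowledge.get(best.claim, {}).get("causes", [])
    return f"{best.claim} (score {best.score:.2f}); plausible causes: {causes}"

knowledge_base = {
    "pump failure": {"prior": 0.8, "causes": ["worn bearing", "cavitation"]},
    "sensor drift": {"prior": 0.3, "causes": ["calibration lapse"]},
}

candidates = propose_hypotheses("pressure dropped while flow stayed constant")
ranked = rank_against_knowledge(candidates, knowledge_base)
print(reason(ranked[0], knowledge_base))
```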
AI scientist Ben Goertzel stressed that the deep neural net systems currently dominating the commercial AI landscape will not make much progress toward building real AGI systems.
Goertzel, who is best known for coining the term AGI, said that enhancing current models such as GPT-3 with fact-checkers will not fix the problems that deep learning faces and will not make them capable of generalization like the human mind.
"Engineering true, open-ended intelligence with general intelligence is totally possible, and there are several routes to get there," Goertzel said.
He proposed three solutions: doing a real brain simulation; making a complex self-organizing system that is quite different from the brain; or creating a hybrid cognitive architecture that self-organizes knowledge in a self-reprogramming, self-rewriting knowledge graph controlling an embodied agent. His current initiative, the OpenCog Hyperon project, is exploring the third approach.
Francesca Rossi, IBM fellow and AI Ethics Global Leader at the Thomas J. Watson Research Center, proposed an AI architecture that takes inspiration from cognitive science and Daniel Kahneman's "Thinking, Fast and Slow" framework.
The architecture, named Slow and Fast AI (SOFAI), uses a multi-agent approach composed of fast and slow solvers. Fast solvers rely on machine learning to solve problems. Slow solvers are more symbolic, attentive and computationally complex. There is also a metacognitive module that acts as an arbiter and decides which agent will solve the problem. Like the human brain, if the fast solver can't address a novel situation, the metacognitive module passes it on to the slow solver. This loop then retrains the fast solver to gradually learn to address these situations.
"This is an architecture that is supposed to work for both autonomous systems and for supporting human decisions," Rossi said.
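SOFAI itself is a research system whose code is not reproduced here; the control flow Rossi describes, a metacognitive arbiter that escalates from the fast learned solver to the slow deliberative one and then retrains the fast solver, might look roughly like this hedged sketch. The solver internals and confidence threshold are invented placeholders.

```python
# Hedged sketch of a fast/slow solver loop with a metacognitive arbiter.
# Only the control flow mirrors the description; everything else is a stand-in.
def fast_solver(problem):
    """Cheap learned model: returns (answer, confidence)."""
    return ("cached_answer", 0.95) if problem.get("seen_before") else (None, 0.1)

def slow_solver(problem):
    """Expensive symbolic/deliberative solver: assumed correct but slow."""
    return "deliberate_answer"

def retrain_fast_solver(problem, answer):
    """Feed the slow solver's result back so the fast solver handles it next time."""
    problem["seen_before"] = True  # stand-in for an actual training update

def metacognitive_arbiter(problem, confidence_threshold=0.8):
    answer, confidence = fast_solver(problem)
    if confidence >= confidence_threshold:
        return answer, "fast"
    # Novel or uncertain situation: escalate, then learn from the outcome.
    answer = slow_solver(problem)
    retrain_fast_solver(problem, answer)
    return answer, "slow"

problem = {"seen_before": False}
print(metacognitive_arbiter(problem))  # first encounter -> slow path
print(metacognitive_arbiter(problem))  # after retraining -> fast path
```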
Jürgen Schmidhuber, scientific director of The Swiss AI Lab IDSIA and one of the pioneers of modern deep learning techniques, said that many of the problems raised about current AI systems have been addressed in systems and architectures introduced in the past decades. Schmidhuber suggested that solving these problems is a matter of computational cost and that in the future, we will be able to create deep learning systems that can do meta-learning and find new and better learning algorithms.
Jeff Clune, associate professor of computer science at the University of British Columbia, presented the idea of AI-generating algorithms.
"The idea is to learn as much as possible, to bootstrap from very simple beginnings all the way through to AGI," Clune said.
Such a system has an outer loop that searches through the space of possible AI agents and ultimately produces something that is very sample-efficient and very general. "The evidence that this is possible is the very expensive and inefficient algorithm of Darwinian evolution that ultimately produced the human mind," Clune said.
Clune has been discussing AI-generating algorithms since 2019, which he believes rests on three key pillars: Meta-learning architectures, meta-learning algorithms, and effective means to generate environments and data. Basically, this is a system that can constantly create, evaluate and upgrade new learning environments and algorithms.
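Clune's papers describe AI-generating algorithms at the level of principle rather than a single codebase; a toy rendering of the outer loop, mutating candidate learner configurations, scoring them across generated environments and keeping the best, could look like the sketch below. The search space and fitness function are invented purely for illustration.

```python
# Toy outer loop over a space of learner configurations, in the spirit of
# AI-generating algorithms: generate variants, evaluate on generated tasks,
# keep the fittest. Everything below is illustrative.
import random

def generate_environment(seed):
    """Stand-in for the 'generate environments and data' pillar."""
    rng = random.Random(seed)
    return {"difficulty": rng.uniform(0.1, 1.0)}

def evaluate(config, env):
    """Toy fitness: how well a learner configuration copes with an environment."""
    return config["capacity"] * (1.0 - env["difficulty"] * config["brittleness"])

def mutate(config, rng):
    return {
        "capacity": max(0.1, config["capacity"] + rng.gauss(0, 0.1)),
        "brittleness": min(1.0, max(0.0, config["brittleness"] + rng.gauss(0, 0.05))),
    }

rng = random.Random(0)
population = [{"capacity": 1.0, "brittleness": 0.5} for _ in range(8)]

for generation in range(20):
    envs = [generate_environment(generation * 10 + i) for i in range(4)]
    scored = [(sum(evaluate(c, e) for e in envs), c) for c in population]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    survivors = [c for _, c in scored[:4]]
    population = survivors + [mutate(c, rng) for c in survivors]

print("best config after search:", scored[0][1])
```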
At the AGI debate, Clune added a fourth pillar, which he described as leveraging human data.
"If you watch years and years of video on agents doing that task and pretrain on that, then you can go on to learn very, very difficult tasks," Clune said. "That's a really big accelerant to these efforts to try to learn as much as possible."
Learning from human-generated data is what has allowed GPT, CLIP and DALL-E to find efficient ways to generate impressive results. "AI sees further by standing on the shoulders of giant datasets," Clune said.
Clune finished by predicting a 30% chance of having AGI by 2030. He also said that current deep learning paradigms with some key enhancements will be enough to achieve AGI.
Clune warned, "I don't think we're ready as a scientific community and as a society for AGI arriving that soon, and we need to start planning for this as soon as possible. We need to start planning now."
Top 10 AI and machine learning stories of 2022 – Healthcare IT News
Posted: at 12:20 am
Healthcare's comfort level with artificial intelligence and machine learning models, and its skill at deploying them across myriad clinical, financial and operational use cases, continued to increase in 2022.
More and more evidence shows that training AI algorithms on a variety of datasets can improve decision support, boost population health management, streamline administrative tasks, enable cost efficiencies and even improve outcomes.
But there's still a lot of work to be done to ensure accurate, reliable, understandable and evidence-based results that ensure patient safety and account for health equity.
There's no doubt that AI's application in healthcare has gone beyond "real" in 2019 to significant investment by providers and payers last year. This year, we've reported on deeper industry discussions focused on trust and best practices. We've featured industry perspectives on the value of deep learning and neural networks and how to clear data hurdles, along with announcements of successful studies and, of course, new healthcare AI technology partnerships. Here are Healthcare IT News' most-read AI stories of 2022.
How AI bias happens and how to eliminate it. Though it was posted about 30 days before the close of 2021, readers flocked to read the advice of Stanford cardiologist Dr. Sanjiv M. Narayan, co-director of the Stanford Arrhythmia Center, director of its Atrial Fibrillation Program and professor of medicine at Stanford University School of Medicine. Narayan discussed multiple approaches to eliminating bias in AI, including training multiple versions of algorithms, adding multiple datasets to AI and updating a machine's training datasets over time. He cautioned that algorithmic hygiene strategies are not foolproof, and that bias is more likely to compound when integrating complex systems.
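Narayan's suggestions are procedural rather than code, but the first two, training several versions of a model and checking behavior across multiple datasets, can be sketched as a simple audit loop. The synthetic data, model choice and subgroup metric below are placeholder assumptions (scikit-learn is assumed to be available), not the approach from the original story.

```python
# Hedged sketch: fit a model variant per dataset and compare a simple
# performance gap across subgroups as a crude bias signal.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_dataset(n, shift):
    """Synthetic cohort: 'shift' mimics a demographic difference between sites."""
    X = rng.normal(shift, 1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    group = (rng.random(n) > 0.5).astype(int)  # stand-in for a protected attribute
    return X, y, group

datasets = {"site_a": make_dataset(500, 0.0), "site_b": make_dataset(500, 0.7)}

for name, (X, y, group) in datasets.items():
    model = LogisticRegression(max_iter=1000).fit(X, y)
    preds = model.predict(X)
    gap = abs(
        accuracy_score(y[group == 0], preds[group == 0])
        - accuracy_score(y[group == 1], preds[group == 1])
    )
    print(f"{name}: subgroup accuracy gap = {gap:.3f}")
```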
Developing trust in healthcare AI, step by step. While usage of AI in healthcare has increased, providers have been concerned about how much they should trust machine learning in clinical settings. A Chilmark Research report by analyst Dr. Jody Ranck indicated that, based on a review of hundreds of first-year COVID-19 pandemic algorithms, numerous instances of AI could not be validated. Ranck proposed strategies to increase evidence-based AI development.
Sentient AI? Convincing you it's human is just part of LaMDA's job. In this guest post, published after a mainstream media feeding frenzy about an ostensibly "sentient" machine learning application, Dr. Chirag Shah, associate professor at the Information School at the University of Washington, explains how Google's LaMDA chatbot, which easily passed the Turing Test, does not prove the presence of self-aware consciousness. LaMDA proves only that it can create the illusion of possessing self-awareness, which is exactly what it was designed to do.
Duke, Mayo Clinic, others launch innovative AI collaboration. Artificial intelligence researchers and technology leaders from Duke, Mayo Clinic, University of California Berkeley and others unveiled a new Health AI Partnership at a virtual HIMSS learning event just before the close of 2021. By developing an online curriculum to help educate IT leaders and working with stakeholders, the collaborators are aiming to develop a standardized, evidence-driven process for AI deployments in healthcare.
The intersection of remote patient monitoring and AI. Robin Farmanfarmaian, author of "How AI Can Democratize Healthcare: The Rise in Digital Care" and four other books, discussed how AI is impacting remote patient monitoring today and how it can democratize healthcare. "RPM has the ability to collect clinical-grade data when people are in all stages of health and at all ages," she said. "When collected continuously in machine-readable databases, once RPM is more fully adopted, those databases have the ability to dwarf EHR data from a hospital or health system."
Mayo launches AI startup program, with assists from Epic and Google. In March, The Mayo Clinic launched a 20-week startup program to give early-stage health tech AI companies a boost. The clinic's technology, medical and business experts, along with thought leaders from Google and Epic, were to provide the cohort with expertise to help the startups delineate AI model requirements.
AI study finds 50% of patient notes duplicated. University of Pennsylvania Perelman School of Medicine in Philadelphia researchers used natural language processing to find the rate of note duplication, as well as the rate of duplication year over year, across the records of 1.96 million unique patients from 2015 to 2020. "Duplicate text casts doubt on the veracity of all information in the medical record, making it difficult to find and verify information in day-to-day clinical work," according to their JAMA report published in September.
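The Penn team's NLP pipeline is not reproduced in the article; a crude stand-in for the underlying idea, measuring how much of a new note is copied from a patient's earlier notes, can be sketched with word n-gram overlap. The notes, n-gram size and scoring below are invented for illustration and are not the study's method.

```python
# Hedged sketch of duplicate-text detection between successive clinical notes,
# using word n-gram overlap as a rough proxy for an NLP duplication score.
def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def duplication_rate(new_note, prior_notes, n=5):
    """Fraction of the new note's n-grams already present in earlier notes."""
    new = ngrams(new_note, n)
    if not new:
        return 0.0
    prior = set().union(*(ngrams(p, n) for p in prior_notes)) if prior_notes else set()
    return len(new & prior) / len(new)

prior = [
    "Patient reports stable chest pain on exertion, relieved by rest. "
    "Continue current statin therapy and follow up in three months."
]
new = ("Patient reports stable chest pain on exertion, relieved by rest. "
       "Started low dose beta blocker today.")

print(f"duplicated fraction: {duplication_rate(new, prior):.2f}")
```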
AWS, GE leaders talk hurdles to data-sharing, AI implementation. In a fireside chat at HIMSS22, Amazon Web Services' Dr. Taha Kass-Hout and GE Healthcare's Vignesh Shetty discussed the challenges of AI and the opportunities for making better-connected decisions.
How AI and machine learning can predict illness and boost health equity. In a recent Q&A, Brett Furst, president of HHS Tech Group, discussed how leveraging the COVID-19 Research Database, one of the world's most comprehensive cross-linked data sets, can establish cause-effect relationships between multiple variables. When machine learning determines how multiple variables interact, it can reliably predict health outcomes.
CommonSpirit Health gains huge efficiencies with AI-infused OR scheduling tool. This case study, featuring Brian Dawson, system vice president of perioperative services at CommonSpirit, showed how the health system implemented an AI utilization tool that would improve operating room efficiencies across its 350 hospitals. "Healthcare providers across the globe have had to do more with less, and it has led to increased burnout, staff shortages, patient dissatisfaction and scarce resources," said Dawson.
Andrea Fox is senior editor of Healthcare IT News. Email: afox@himss.org. Healthcare IT News is a HIMSS publication.
AI and Machine Learning Ad Technology Will Dominate The 2023 … – stupidDOPE.com
Posted: at 12:20 am
Artificial intelligence (AI) and machine learning have already made significant inroads into the media landscape, and they are poised to become even more dominant in the coming years. In fact, many experts predict that by 2023, AI and machine learning will be at the forefront of the media industry, driving innovation and shaping the way that we consume and interact with content.
There are a number of reasons why AI and machine learning will likely dominate the media landscape in 2023 and beyond.
In conclusion, AI and machine learning are poised to dominate the media landscape in 2023 and beyond. These technologies are already driving innovation, improving efficiency, and enhancing the user experience, and they are likely to become even more important in the coming years. As a result, media companies that are able to effectively leverage these technologies will be well positioned to succeed in the rapidly changing media landscape of the future.
Looking for help in this area? Reach out to AHOD.
Machine learning and hypothesis driven optimization of bull semen … – Nature.com
Posted: at 12:20 am
The Economic Impact of Transitioning to Hybrid Cloud for Analytics … – RTInsights
Posted: at 12:20 am
Analytics and ML performance and speed to results can be vastly improved when using a hybrid cloud that incorporates a modern database.
Businesses today are making analytics and machine learning a central part of their operations to help with everything, including improving efficiencies, hyper-personalizing customer services, and more. The dynamic nature of the workloads leads many businesses to move to hybrid cloud.
Hybrid cloud offers great scalability, cost savings, and the ability to move workloads to platforms that are optimized for the compute, storage, and data management needs of analytics and ML-intensive operations. In particular, performance and speed to results can be vastly improved when using a modern database that offers capabilities, tools, and support that are more advanced than many on-premises technologies.
That is an area where working with a company like Vertica, a Micro Focus line of business, comes into play. The Vertica SQL database and in-database machine learning solutions support the entire predictive analytics process with massively parallel processing and a familiar SQL interface.
Vertica allows wide-scale use of analytics and ML throughout a business. That, in turn, helps deliver significant economic value to a business. Unfortunately, many companies need help with their particular move to hybrid.
Addressing geographically incurred latency
Another issue is how to make use of data and databases that already exist but may not be geographically close to cloud resources. The issue here is that when moving workloads to the cloud, latency due to geographic separation may be a problem. In particular, latency becomes especially important when analytics and ML routines are run on highly optimized platforms such as Vertica's.
To address this issue, the Vertica platform leverages technology from Vcinity. Vcinity uses patented technology to enable the Vertica platform to process geographically dispersed data across hybrid cloud environments as if Vertica and its data were co-located. It delivers LAN-like performance regardless of distance and latency and enables applications to access data where and when it is created.
Vertica's unified analytics platform, combined with other offerings and capabilities delivered via partnerships, has uses in a broad range of applications. In all cases, the business reaps significant benefits. Some examples include:
B2C marketing: Netcore is a global Martech product company that helps B2C brands create digital customer experiences with a range of products that help in acquisition, engagement, and retention. Netcore's clients use its solutions to plan, execute, and monitor marketing campaigns across different channels such as Email, SMS, App, WhatsApp, and so on. Given limited budgets, the key ROI challenge for clients is to target the right customers, at the right time, on the right channels, and with the right message to maximize response rates and conversions.
To assist its clients, Netcore created Raman, an AI platform that analyzes huge datasets of historical and recent customer behavior to deliver smarter customer segmentation, improved targeting, and sophisticated predictive modeling.
As clients rapidly adopted Raman, Netcore was able to maintain the analytics performance customers expected and required using the capabilities of Vertica's analytics platform. In particular, the company's database was able to handle write- and read-intensive workloads in parallel without any lag or drop in efficiency. In contrast, its existing implementations of MySQL and MongoDB were unable to handle this workload efficiently, leading to slower model refresh and analysis. One additional benefit of teaming with Vertica is that the solution easily scales without performance degradation.
Analytically-driven businesses: Vertica teamed with H3C to deliver the benefits of cloud-native analytics to enterprise data centers. Specifically, Vertica and H3C integrated their offerings to help analytically-driven companies elastically scale capacity and performance as data volumes grow and as machine learning initiatives become a business imperative, all from within hybrid environments.
Vertica with H3C ONEStor enables businesses to adopt hybrid cloud for analytics wherever their data resides. Combining these two technologies offers fast analytics while simplifying data protection with easy backup and replication features.
The combined offering delivers high-performance analytics and machine learning with enterprise-grade object storage to enable organizations to address scalability needs for now and in the future, leverage the separation of compute and storage architecture to address varying dynamic workload requirements, and simplify database operations.
eWallet app: Vertica is working with Vietnam's largest e-wallet company, MoMo, to provide data analytics and machine learning for MoMo's all-in-one super app, which is used for e-wallet and other FinTech services. The Vertica Unified Analytics Platform provides the company with actionable analytical insights.
MoMo needed a solution that offered the highest performance at extreme scale, the broadest analytical and machine learning capabilities, and complete support for multi-cloud and hybrid deployment to accommodate any future growth needs. Vertica met all of these requirements.
To put the requirements into perspective, as of May 2022, MoMo had 31 million users in Vietnam with 2 PB+ of data. It expects that data volume to double next year and the number of users to double over the next two years. Vertica provides MoMo with the flexibility to run its analytical workloads in the cloud, on-prem, or in hybrid environments, giving the company deployment flexibility regardless of where its future needs take it. Another factor when choosing Vertica was that the unified analytics platform combines the strengths of the data warehouse and the data lake ecosystem all in one, ensuring high-performance, scalable analytics and machine learning, delivered at an overall lower total cost of ownership.
Hybrid cloud is well-suited to the dynamic demands of modern analytics and machine learning workloads. Increasingly, businesses are finding that one essential element of a hybrid environment for these workloads is a modern database.
The Vertica Unified Analytics Platform is just such a database. It is based on a massively scalable architecture with a broad set of analytical functions spanning event and time series, pattern matching, geospatial, and end-to-end in-database machine learning.
As such, Vertica enables many businesses to easily apply these powerful functions to the largest and most demanding analytical workloads, arming businesses and their customers with predictive business insights faster than other analytical databases or data warehouses in the market.
Critical to being part of a hybrid environment, Vertica provides its Unified Analytics Platform as SaaS on AWS, across all major public clouds, and on-premises data centers as a BYOL (bring your own license) model.
Learn more: https://www.vertica.com/what-is/hybrid-cloud/
How deep learning will ignite the metaverse in 2023 and beyond – VentureBeat
Posted: at 12:20 am
The metaverse is becoming one of the hottest topics not only in technology but in the social and economic spheres. Tech giants and startups alike are already working on creating services for this new digital reality.
The metaverse is slowly evolving into a mainstream virtual world where you can work, learn, shop, be entertained and interact with others in ways never before possible. Gartner recently listed the metaverse as one of the top strategic technology trends for 2023, and predicts that by 2026, 25% of the population will spend at least one hour a day there for work, shopping, education, social activities and/or entertainment. That means organizations that use the metaverse effectively will be able to engage with both human and machine customers and create new revenue streams and markets.
However, most of these metaverse experiences will be able to continue to progress only with the use of deep learning (DL), as artificial intelligence (AI) and data science will be at the forefront of advancing this technology. For example, deep learning algorithms are making computers better at gesture recognition and eye tracking, thanks to the latest developments in computer vision that enable natural interactions and better understanding of emotion and body language. As such technologies are an essential aspect of the metaverses immersive interface, deep learning technologies now aim to further enhance realistic AI storytelling, creative partnering and machine understanding.
Currently, the digital realities being developed by different companies have their own attributes and integrated functionalities, and are at different development levels. Many of these multiverse platforms are expected to converge, and this junction is where AI and data science domains, such as deep learning, will be critical in taking users to a new stage in their metaverse journey. Success in these endeavors will be contingent upon understanding vital elements of the algorithmic models and their metrics.
Deep learning-based software is already being integrated into virtual worlds; some examples include autonomously driving chatbots and other forms of natural language processing to ensure seamless interactions. For another example, in AR technology, deep learning-enabled AI is used in camera pose estimation, immersive rendering, real-world object detection and 3D object reconstruction, helping to guarantee the variety and usability of AR applications.
In October, Meta announced the launch of its universal speech translator (UST) project, which aims to create AI systems that enable real-time speech-to-speech translation across all languages, regardless of the user's language. In addition, the company's recent advances in unsupervised speech recognition (wav2vec-U) and unsupervised machine translation (mBART) will aid the future work of translating more spoken languages within the metaverse.
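Meta's UST speech pipeline is not publicly packaged, but mBART-style text translation can be tried with a released checkpoint in a few lines. The sketch below assumes the Hugging Face transformers library and the public mBART-50 many-to-many checkpoint (a multi-gigabyte download); it illustrates the model family, not Meta's metaverse stack.

```python
# Hedged sketch: many-to-many text translation with a released mBART-50
# checkpoint via Hugging Face transformers (not Meta's UST speech system).
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

checkpoint = "facebook/mbart-large-50-many-to-many-mmt"
model = MBartForConditionalGeneration.from_pretrained(checkpoint)
tokenizer = MBart50TokenizerFast.from_pretrained(checkpoint)

tokenizer.src_lang = "fr_XX"  # source language code
inputs = tokenizer("Le métavers a besoin de traduction en temps réel.", return_tensors="pt")

# Force the decoder to start generating in the target language (English).
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```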
All such implementations require massive training data and modeling, now made possible through deep learning methodologies. In addition, AI-based Web3 technologies are now being called upon to automate smart contracts and decentralized ledgers, and create universal blockchain technologies to enable virtual transactions.
"Deep learning provides much higher accuracy [and] almost no false positives, and if properly implemented, eliminates data noise (corruption)," Jerrod Piker, competitive intelligence analyst at Deep Instinct, told VentureBeat.
Piker said that such implementations could aid in improving the metaverse, as a deep learning model is trained on all available data, providing incredible results on image recognition and natural language processing.
"Meta has applied this in translating code from one programming language to another. Since the metaverse is a wide and open world, automatically translating code can have a huge impact on seamless integration between different platforms within the metaverse," he said.
Likewise, Scott Stephenson, CEO and cofounder at Deepgram, believes that deep neural networks are more capable and sophisticated than neural networks with fewer layers.
"Companies have an interesting opportunity for their customers and community to interact with their brand(s) in new and exciting ways, and deep learning-based artificial intelligence plays a major role in facilitating those experiences," said Stephenson.
He explained that companies can now have AI brand representatives, trained on a company's unique linguistic style and product documentation, wander about the metaverse, evangelizing whatever product or service the company seeks to promote.
"Rather than giving them dozens or even hundreds of lines of pre-scripted dialogue (like what you'd experience in most video games these days), there's no reason why a metaverse platform shouldn't be running a generative text chatbot in the background to drive conversation and engagement," he said.
Despite its promise and potential, the metaverse continues to face user-based risks, such as data security. Deep learning-based AI models could be instrumental in overcoming those challenges when integrated with legacy tools.
"Securing sensitive data that is being created, sent and shared across the metaverse requires more advanced techniques than past data security efforts. Deep learning can provide excellent results on this front with its uncanny ability to accurately identify content," said Piker. "For instance, ongoing inspection for certain sensitive data to ensure it is not being leaked outside of its intended channel is extremely important, and deep learning is unmatched in correctly and efficiently identifying digital content of all kinds, with a far superior false positive rate vs. other machine learning models."
Scott Likens, innovation and trust technology leader at PwC, said that many brands have started to see the metaverse's actual business value as deep learning and AI converge with VR to provide a much deeper experience for the metaverse in the future.
"The generation of assets in the metaverse now benefits from AI, as there is currently a lack of content and digital assets to fill the metaverse. In addition, with the advances in data collection through IoT, we can feed the data-hungry deep learning models to create lifelike yet synthetic worlds that are being used to help drive business strategy and more at a pace we can't match in the current workforce," said Likens.
"Deep learning technologies are going to be highly important in terms of automation," says Patrik Wilkens, vice president of operations at TheSoul Publishing, whose universe of well-known channels includes 5-Minute Crafts, Bright Side and 123 GO!.
"Progress that used to take hours and hours of human effort is now attainable with incredible efficiency. As tech companies and content creators utilize the best tech, incorporating deep learning into their processes, the manpower that was previously used to make things work can now be used on other things. This is especially important for creative domains," Wilkens told VentureBeat.
Wilkens further explained that his company, TheSoul, is currently utilizing deep learning-based algorithms for several metaverse use cases.
"We are using deep learning-based artificial intelligence in our content right now to proofread, translate, [perform] quality assurance (QA) and build graphics. We're also in the development stage on a number of initiatives, including the 5-Minute Crafts marketplace within the metaverse," he said. "That could work by your avatar walking into TheSoul's shopping mall-style building, watching a craft video, and going to the AI assistant to help you purchase the materials needed to complete the project."
Adrian McDermott, CTO at Zendesk, believes that in 2023, we can expect to see deep learning and AI technologies power and scale customer self-service in the metaverse.
"Businesses will expand the use of AI and automation to route and escalate urgent user issues in real time, ensuring the experience remains seamless," McDermott told VentureBeat. "Large language models (LLMs) will play a role in helping brands understand customer needs in these new spaces, as well as generating potential responses to service requests. Self-service powered by deep learning-based AI models can unburden human agents by helping customers sort through straightforward questions more easily, freeing agents up to dig into the more difficult cases."
McDermott said that we would begin to see industries beyond retail and gaming begin to build or pilot metaverse experiences to stay competitive. Brands will be using the metaverse to not only engage with customers, but to build loyalty through digital collectibles, and automation will play a role in that journey.
"Don't be surprised to see not only an expansion of digital storefront and concert experiences, but also increased use by the enterprise for hosting meetings, training and upskilling employees on critical job skills," he said.
Likewise, Wilkens predicts that in 2023, we can expect brands to begin building communities around virtual influencers.
"Brands will focus on developing more meaningful content to engage their virtual influencers' communities in an effort to be more human and connect with audiences authentically," he said. "Additionally, we expect to see a rise of avatars. They will be everywhere, especially in the metaverse, and will evolve dramatically on platforms like Snapchat due to new features like avatar fashion and digital items coming in 2023."
This New Artificial Intelligence (AI) Method is Trying to Solve the Memory Allocation Problem in Machine Learning Accelerators – MarkTechPost
Posted: at 12:20 am
Top 10 Highly Paying Machine learning Jobs to Apply for in the New … – Analytics Insight
Posted: at 12:20 am
Many professionals, such as engineers, have entered the field due to machine learning's rapid growth and the potential to create innovative new technology. There are clearly high-paying machine learning jobs in India, but there are also many other machine learning jobs in 2023 that will interest you. Machine learning is already influencing our daily lives and the choices we make, even if we are only beginning to explore its potential. And there are no signs of slowing down. By 2027, the global market is anticipated to reach $117.19 billion. Additionally, learning-focused and professionally rewarding opportunities are available. As a result, engineers and academics are becoming much more interested in this sector. The top high-paying machine learning jobs are listed below. This list has been updated, and no matter where you are in your career, these machine learning jobs will assist you.
Director of analytics: The duties of this senior-level post include serving as a mentor to the staff members of the data analytics and data warehousing divisions. The duty of arranging the technological, financial, and human resources to meet business needs falls to the director of analytics. The company's Chief Data Officer gives the analytics director direction on how to use data to produce the best results. This managerial and leadership position benefits greatly from strategic thinking and teamwork.
Principal scientist: The principal scientist performs research in labs and develops creative, significant data science initiatives, making this one of the high-paying ML jobs. Ensuring that the team has the resources it needs to complete its assigned duties, and to do so effectively, is another responsibility of this lead scientist. The main responsibilities of this position include leading cross-functional teams and coordinating with stakeholders. Strong and growing demand makes the principal scientist role one of the high-paying ML jobs in India.
Computer scientist: As a computer scientist, you create and design software to address issues. In other words, this technological position involves building websites and mobile applications. To enable interactions between people and computers as well as between computers, computer scientists also create and evaluate mathematical models. This has always been one of the top ML jobs in India because working with money, both your own and other people's, is the stuff of dreams.
Data scientist: Data scientists manage and interpret the constantly generated data that characterizes the digital world. They must clean the data, because it is rarely clean, and then evaluate and extrapolate from it. They use a variety of statistical and machine-learning techniques to do this. The data scientist's insights are of utmost importance to business decision-makers. It is one of the fastest-paced machine learning careers in India, making it a high-paying machine learning job.
Statistician: The core of data science is statistical data analysis. However, compared to data scientists, statisticians take a different approach to creating and testing models. Statisticians' analytical skills allow organizations to analyze quantitative data and identify potential trends. It is one of the ML jobs with the best salaries available right now.
Machine learning engineer: ML engineers, who hold one of the high-paying ML jobs in the world, feed data into the theoretical models created by data scientists. They aid in the scaling process to produce production-level models that can manage terabytes of real-time data. You would need solid knowledge of Scala, Python, and Java to start working as an ML developer. Strong demand makes it one of the high-paying ML jobs in India.
Research engineer: The main responsibility of research engineers is the creation of new technological goods. Through research and the creation of engineering knowledge, these professionals enhance current systems and procedures. Research engineers hold one of the high-paying ML jobs in India due to expanding demand.
Computer vision engineer: Working with deep learning architectures and image analysis algorithms is part of this job description. Engineers who specialize in computer vision use their analytical abilities to build platforms for image processing and visualization. Anyone interested in this field should have strong computing skills.
Data engineer: Data engineers design and build the data systems that ML and AI capabilities run on. This has always been one of the top machine learning jobs in India because working with money, both your own and other people's, is the stuff of dreams.
Algorithm engineer: Several aspects of computer algorithms, including their design, analysis, implementation, optimization, and experimental evaluation, are addressed by algorithm engineering. For this position, familiarity with software engineering applications of algorithms is necessary. Algorithm engineers now hold some of the high-paying ML jobs in India due to rising demand.
MIT xPRO launches programs with Simplilearn in Executive Leadership Principles and Machine Learning for Business, Engineering, And Science – Yahoo…
Posted: at 12:20 am
MIT xPRO launches two new programs in Leadership and Machine learning through Simplilearn
The programs, spanning a period of four months, will be hosted in a blended format including masterclasses
SAN FRANCISCO, Dec. 27, 2022 /PRNewswire/ --MIT xPRO has announced two new upskilling programs in Executive Leadership Principlesand Machine Learning for Business, Engineering, and Science. Delivered through digital skills training platform Simplilearn, these programs leverage MIT xPRO's thought leadership in engineering and management developed over years of research, teaching, and practice as well as Simplilearn's dynamic, interactive, digital learning platform.
Simplilearn_Logo
The Executive Leadership Principles program is designed to enable learners to understand an array of organizational and leadership aspects. Some of the focus areas include organizational strategies and capabilities, applying influence, negotiation, conflict resolution, change management, problem solving, navigating culture and networks, as well as discovering and implementing leadership strengths. This Executive Program offers masterclasses taught by MIT faculty and instructors, assessments, case studies, and tools. It is best suited for early and mid-career professionals looking to advance their leadership and capabilities while on the job. Through this program, learners can benefit from an executive certificate of completion from MIT xPRO, 5 Continuing Education Units (CEUs) from MIT xPRO, scope to connect with an international community of professionals, as well as an opportunity to work on real-world projects. Eligibility criteria require learners to have a graduate degree; they could be working professionals with technical or non-technical backgrounds.
The Machine Learning for Business, Engineering and Science program is designed to demystify machine learning through computational engineering principles and applications. It provides the opportunity to learn from MIT faculty, while connecting with an international community of professionals and working on projects based on real-world examples. Learners will gain the skills to apply their knowledge to various aspects of work using simulations, assessments, case studies, and tools. Learners get a chance to earn a Professional Certificate of completion and 10 Continuing Education Units (CEUs) from MIT xPRO. The program is best suited for professionals with bachelor's degrees in engineering, business or physical science who are interested in knowing about the application of Machine Learning across various domains.
Mr. Anand Narayanan, Chief Product Officer, Simplilearn, said, "The need to upskill remains consistent and relevant for professionals across the board. In the dynamic workplace of today, it is imperative for professionals to be able to effectively complete tasks and solve problems strategically. Ensuring to map skills and constantly upgrading oneself to match industry requirements will ensure consistent professional growth. We are pleased to work with MIT xPRO to offer these programs in new-age skills enabling employees to upskill and achieve high-quality results in their workspace."
Announcing the launch, MIT xPRO says, "Students and professionals today are keen to regularly upskill and up their game when it comes to strengthening their careers. There is a need to stay abreast with industry developments and be open and agile to change. In this regard, we are pleased to work with Simplilearn to curate programs that are sure to provide in-depth and comprehensive knowledge, relevant to the dynamic industry shifts. We are confident that they will assist learners in achieving their career objectives."
About MIT xPRO
Technology is accelerating at an unprecedented pace causing disruption across all levels of business. Tomorrow's leaders must demonstrate technical expertise as well as leadership acumen in order to maintain a technical edge over the competition while driving innovation in an ever-changing environment.
MIT uniquely understands this challenge and how to solve it with decades of experience developing technical professionals. MIT xPRO's online learning programs leverage vetted content from world-renowned experts to make learning accessible anytime, anywhere. Designed using cutting-edge research in the neuroscience of learning, MIT xPRO programs are application focused, helping professionals build their skills on the job.
About Simplilearn
Founded in 2010 and based in San Francisco, California, and Bangalore, India, Simplilearn, a Blackstone company, is the world's #1 online Bootcamp for digital economy skills training. Simplilearn offers access to world-class, work-ready training to individuals and businesses around the world. The Bootcamps are designed and delivered with world-renowned universities, top corporations, and leading industry bodies via live online classes featuring top industry practitioners, sought-after trainers, and global leaders. From college students and early career professionals to managers, executives, small businesses, and big corporations, Simplilearn's role-based, skill-focused, industry-recognized, and globally relevant training programs are ideal upskilling solutions for diverse career and/or business goals. For more information, please visit http://www.simplilearn.com/
SOURCE Simplilearn Solutions Private Limited
Machines are needed to find complex software problems, humans … – SiliconANGLE News
Posted: at 12:20 am
Finding rare events in software applications is one of the principal reasons artificial intelligence succeeds in increasingly complex environments, says a DevOps troubleshooting automation expert.
"It's telling you this cluster of events is both unusual and unlikely to be random," said Ajay Singh (pictured left), founder and chief executive officer of Zebrium Inc., a machine learning analytics provider recently acquired by ScienceLogic Inc.
Singh and Michael Nappi (pictured right), chief product and engineering officer at ScienceLogic, spoke with theCUBE hosts John Furrier and Savannah Peterson at AWS re:Invent, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed advances in the processes for finding root causes of software problems. (* Disclosure below.)
The problem with traditional fault-finding is that humans can't scale quickly the way data can, according to Singh. That's because modern cloud applications, with their plethora of microservices, containers and so on, are creating ever more complex environments. That's all exacerbated by the increasing speed at which changes get rolled out. "Software breaks," he said.
"People develop new features within hours, push them out to production. The human has just no ability or time to understand what's normal. You need a machine," Singh explained.
"You can't manage what you don't know about," added Nappi. "Visibility, discoverability, understanding what's going on: in a lot of ways, that's the really hard problem to solve. That's where AI comes in, and Zebrium has its own specialized approach to things."
"At its heart, it's classifying the event catalog of any application stack, figuring out what's rare," Singh explained. When things start to break, it's telling you this cluster of events is both unusual and unlikely to be random, indicating the root cause of the problem.
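Zebrium's production system is proprietary, but the idea Singh describes, scoring each log event type by rarity and flagging a temporally tight cluster of rare types, can be sketched in a few lines. The log templates, rarity threshold, and window size below are invented for illustration.

```python
# Hedged sketch of rare-event clustering in logs: count how often each event
# template occurs, then flag time windows where several rare templates co-occur.
from collections import Counter

# (timestamp_seconds, event_template) pairs; templates stand in for parsed log lines.
events = [
    (1, "heartbeat ok"), (2, "heartbeat ok"), (3, "request served"),
    (4, "heartbeat ok"), (5, "request served"), (6, "heartbeat ok"),
    (100, "disk latency spike"), (101, "db connection reset"),
    (102, "pod restart"), (103, "heartbeat ok"),
]

counts = Counter(template for _, template in events)
total = len(events)
rare = {t for t, c in counts.items() if c / total < 0.15}  # illustrative rarity cutoff

WINDOW = 5  # seconds
clusters = []
for ts, template in events:
    if template not in rare:
        continue
    # Collect distinct rare templates observed close in time to this event.
    nearby = {tmpl for t2, tmpl in events if tmpl in rare and abs(t2 - ts) <= WINDOW}
    if len(nearby) >= 3:  # several distinct rare events close together
        clusters.append((ts, sorted(nearby)))

if clusters:
    ts, templates = clusters[0]
    print(f"suspicious cluster around t={ts}s: {templates}")
```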
The process of identifying issues with more accuracy has changed as services have become more prevalent in information technology. "You can't hire enough engineers to scale that kind of complexity. They use machine learning to tremendous effect to rapidly understand the root cause of an application failure," Nappi said of Zebrium's AI approach.
Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of AWS re:Invent:
(* Disclosure: ScienceLogic Inc. sponsored this segment of theCUBE. Neither ScienceLogic nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)