Ethereum could experience bullish run, ETH ultrasound money narrative at risk – FXStreet
Posted: May 15, 2024 at 2:44 am
Ethereum (ETH) price action on Wednesday shows it could be gathering momentum for a rally, even as a recent report from CryptoQuant warns that its "ultrasound money" narrative is at risk. Meanwhile, the top altcoin's co-founder, Vitalik Buterin, and co-authors have published Ethereum Improvement Proposal (EIP) 7702 as an alternative to EIP-3074.
Read more: Ethereum resumes sideways move as Grayscale files to withdraw Ethereum futures ETF application with the SEC
Ethereum is set to see interesting updates on its blockchain following recent developments. Here are key market movers for the top altcoin:
A recent report by CryptoQuant highlights that Ethereum is becoming inflationary following the Dencun upgrade in March. After the Merge, which saw Ethereum transition from a Proof-of-Work (PoW) to a Proof-of-Stake (PoS) consensus mechanism, the network burned gas fees at a rate that sharply slowed the growth of ETH's circulating supply.
However, since the Dencun upgrade, which introduced "blobs" to boost Ethereum's scalability and reduce user fees, the amount of gas fees burned has fallen significantly.
"Before the Dencun upgrade, the higher network activity on Ethereum meant higher fees burned and hence less ether supply. However, after the Dencun upgrade, the total amount of fees burned has decoupled from the network activity," noted CryptoQuant. As a result, this puts ETH's 'ultrasound' money narrative at risk since its circulating supply would begin increasing over time.
"We conclude that, at the current rate of network activity, Ethereum will not be deflationary again, and the narrative of 'ultrasound' money has probably died or would need much higher network activity to come back to life," said CryptoQuant.
Furthermore, following several concerns raised about EIP-3074, which was considered for inclusion in the upcoming Pectra upgrade, Ethereum co-founder Vitalik Buterin and co-authors Sam Wilson, Ansgar Dietrichs and Matt Garnett proposed EIP-7702 as an alternative.
EIP-7702 introduces a new transaction type that would allow externally owned accounts (EOAs) to temporarily act as smart contracts during a transaction. Like EIP-3074, the new proposal would enable better ways to abstract gas fees, batch transactions, and improve the user experience. However, it goes further by improving compatibility with the ERC-4337 standard for smart contract wallets (an issue with EIP-3074) and being more future-proof.
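To make the idea more concrete, here is a simplified Python sketch of the concept; the field names loosely follow the proposal's early draft and should be read as assumptions rather than the final specification:

```python
# Illustrative sketch only: a new transaction type carrying signed
# authorizations that let an EOA temporarily execute contract code.
# Field names are assumptions based on the early draft, not a final spec.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Authorization:
    chain_id: int
    code_address: str    # contract code the EOA temporarily adopts
    nonce: int
    signature: bytes     # produced with the EOA's own key

@dataclass
class SetCodeTransaction:
    to: str
    value: int
    data: bytes
    authorization_list: List[Authorization] = field(default_factory=list)

# While such a transaction executes, each authorizing EOA behaves like the
# designated contract, enabling batched calls and sponsored (abstracted) gas.
```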
Some crypto community members have speculated that EIP-7702 would replace EIP-3074 in the upcoming Ethereum Pectra upgrade. One X user commented, "I think it would probably make more sense to include in Pectra than 3074."
Fireblocks' VP of Technology Arik Galansky also commented on the proposal saying, "It's a long while since Ethereum introduced changes that are user-oriented rather than scaling-oriented and this one is right on point."
Also read: Ethereum traders show uncertainty, SEC delays decision on Invesco's ETH ETF application
Meanwhile, the Ethereum Foundation appeared to be dumping ETH again on Wednesday following a transfer of 1,000 ETH worth $3 million to middle multisig address 0xbc9, according to data from Spot On Chain.
This transfer is the latest action in a larger trend where the Ethereum Foundation has sold 1,766 ETH for 4.81 million DAI stablecoin at ~$2,725 since the beginning of 2024. The ETH sales, done in small batches through the same middle multisig address 0xbc9, often coincide with brief Ethereum price declines, noted Spot On Chain.
The market remained quiet on Wednesday as Ethereum continued trading around the $3,000 mark. Despite the calm market, ETH derivatives volume in May has risen to nearly 60% higher than that of Bitcoin, according to QCP.
Read more: Ethereum could see a brief rally despite Michael Saylor's jab at ETH ETFs
The reason for the increase may be the market pricing in volatility around the Securities & Exchange Commission's (SEC) upcoming decision on VanEck's spot ETH ETF application on May 23, noted QCP. This also explains why the market is slightly tilted toward a downward movement, as many expect the SEC to deny spot ETH ETF applications.
While current price action indicates ETH may continue a horizontal movement in upcoming weeks, historical data suggests the largest altcoin could be gathering momentum for a bullish run. Considering ETH has remained in the $2,852 to $3,300 range for nearly a month, it could attempt a sustained breakout above the upper level of the range if it sees a slight bullish trigger.
ETH/USDT 4-hour chart
Onchain data also shows most whales have been accumulating ETH around this range in anticipation of a price rally. A bearish event could see it break below the lower level of the range briefly, presenting a buying opportunity.
Ethereum is trading at $3,013 on Wednesday, down 0.7% on the day.
Ethereum is a decentralized open-source blockchain with smart contract functionality. Serving as the base network for the Ether (ETH) cryptocurrency, it is the second-largest crypto and largest altcoin by market capitalization. The Ethereum network is tailored for scalability, programmability, security, and decentralization, attributes that make it popular among developers.
Ethereum uses decentralized blockchain technology, where developers can build and deploy applications that are independent of any central authority. To make this easier, the network has a programming language in place that helps users create self-executing smart contracts. A smart contract is essentially code that can be verified and that enables transactions between users.
Staking is a process where investors grow their portfolios by locking their assets for a specified duration instead of selling them. It is used by most blockchains, especially those that employ a Proof-of-Stake (PoS) mechanism, with users earning rewards as an incentive for committing their tokens. For most long-term cryptocurrency holders, staking is a strategy to earn passive income by putting their assets to work in exchange for rewards.
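As a rough illustration of the mechanics, the sketch below estimates a simple pro-rata staking reward in Python; the stake size and annual rate are assumptions for illustration, not actual network figures:

```python
# Simple pro-rata staking reward estimate; the APR is an assumed value.
stake_eth = 32.0    # assumed amount locked (one Ethereum validator's stake)
apr = 0.035         # assumed annual reward rate
days_locked = 180   # staking duration

reward = stake_eth * apr * days_locked / 365
print(f"Estimated reward: {reward:.3f} ETH")  # roughly 0.55 ETH over six months
```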
Ethereum transitioned from a Proof-of-Work (PoW) to a Proof-of-Stake (PoS) mechanism in an event christened The Merge. The transformation came as the network sought greater security, a 99.95% cut in energy consumption, and new scaling solutions with a possible threshold of 100,000 transactions per second. With PoS, there are fewer entry barriers for validators, given the reduced energy demands.
See the rest here:
Ethereum could experience bullish run, ETH ultrasound money narrative at risk - FXStreet
Digital Education Market Flourishes with Rising Demand for Online Learning Platforms, Personalized Learning… – WhaTech
Posted: at 2:44 am
Digital Education Market expands with the growing demand for flexible, personalized, and accessible learning solutions across diverse educational sectors.
Digital Education Market Scope and Overview
The digital revolution has transformed many aspects of our lives, including how we learn. The Digital Education Market has witnessed exponential growth, fueled by advancements in technology and changing learning preferences.
The report delves into the landscape of the Digital Education Market, analyzing its competitive dynamics, market segmentation, regional trends, key growth drivers, strengths, recession impact, and a concluding perspective.
In the evolving landscape of learning and knowledge acquisition, the Digital Education Market emerges as a transformative force reshaping how individuals access, consume, and engage with educational content, driving accessibility, inclusivity, and lifelong learning opportunities. With the proliferation of digital technologies and the increasing demand for flexible and personalized learning experiences, educational institutions, corporations, and edtech startups leverage digital education solutions to deliver interactive, engaging, and scalable learning experiences to learners of all ages and backgrounds.
The Digital Education Market offers a diverse array of solutions, including online courses, virtual classrooms, learning management systems (LMS), and educational apps, empowering learners to access educational resources anytime, anywhere, and on any device. By providing multimedia content, adaptive learning algorithms, and social collaboration features, digital education solutions enable personalized learning pathways, interactive assessments, and real-time feedback, fostering learner engagement, retention, and academic success.
As educational stakeholders prioritize innovation and digital literacy as strategic imperatives, the Digital Education Market becomes central to building resilient, inclusive, and future-ready educational ecosystems that empower individuals to thrive in a knowledge-driven society.
Get a Report Sample of Digital Education Market @www.snsinsider.com/sample-request/1958
Competitive Analysis
Major players such as Coursera, edX, Pluralsight, Brain4ce Education Solutions, Udacity, Udemy, Miriadax, Jigsaw Academy, Iversity, Intellipaat, and others are leading the charge in the Digital Education Market. Each player brings unique strengths, content offerings, and delivery methods to cater to diverse learning needs across various domains.
Market Segmentation Analysis
On The Basis of Course Type:
On The Basis of Learning Type:
On The Basis of End-User:
Regional Outlook
The Digital Education Market exhibits global reach and adoption, with North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa emerging as key regions driving market growth. North America is among the leaders in the market, buoyed by a robust technology infrastructure, high internet penetration rates, and strong demand for lifelong learning and professional development.
Europe follows suit, supported by favorable government policies, multilingual content offerings, and a growing emphasis on digital skills development. Asia Pacific presents significant growth opportunities, fueled by rising smartphone adoption, expanding e-learning markets, and government initiatives to promote digital literacy and education.
Latin America and the Middle East & Africa also showcase potential for market expansion, driven by increasing investments in education technology, online learning platforms, and public-private partnerships to address educational inequalities and workforce development challenges.
Key Growth Drivers of the Market
Strengths of the Market
Check for Discount @www.snsinsider.com/discount/1958
Impact of the Recession
Economic downturns and budget constraints may temporarily affect discretionary spending on education and training, leading to short-term challenges for digital education providers. However, the recession also underscores the importance of education, skills development, and lifelong learning as drivers of economic resilience, innovation, and workforce productivity.
Organizations and individuals prioritize investments in digital education solutions to adapt to remote work environments, enhance employability, and seize new opportunities in emerging industries and technologies.
Key Objectives of the Market Research Report
Conclusion
In conclusion, the Digital Education Market represents a transformative force in the global education landscape, offering unparalleled opportunities for lifelong learning, skills development, and knowledge dissemination. As digital technologies continue to reshape traditional education paradigms, digital education platforms play a pivotal role in democratizing access to education, fostering innovation, and empowering individuals and organizations to thrive in the digital age.
By embracing innovation, collaboration, and inclusivity, stakeholders in the Digital Education Market can unlock the full potential of digital learning to create a more equitable, accessible, and sustainable future for education worldwide.
Buy the Latest Version of this Report@www.snsinsider.com/checkout/1958
Table of Contents - Major Key Points
Go here to see the original:
Digital Education Market Flourishes with Rising Demand for Online Learning Platforms, Personalized Learning... - WhaTech
Emerging Trends in Computer Engineering: How AI and Machine Learning Are Shaping the Future – Digital Information World
Posted: at 2:44 am
Computer engineering is evolving at a breakneck pace, with artificial intelligence (AI) and machine learning (ML) driving some of the most significant innovations. From healthcare and finance to transportation and entertainment, these technologies transform how we live, work, and interact. Understanding emerging trends in computer engineering and their impact on various sectors can help professionals stay ahead in this competitive field. In this article, we'll explore how AI and ML are shaping the future of computer engineering, highlighting key trends and their implications.
Edge computing is emerging as a game-changer in computer engineering. It processes data closer to its source rather than relying solely on centralized cloud servers. This approach reduces latency and improves data security. AI at the edge enhances efficiency, enabling real-time analytics in applications like autonomous vehicles, smart cities, and industrial automation.
By integrating AI and ML algorithms into edge devices, engineers can create intelligent systems that adapt and respond to their environments. This trend will continue as more industries recognize the benefits of decentralized computing, making it a pivotal area for innovation.
As AI and ML redefine computer engineering, professionals need advanced education to stay relevant. An online MS in electrical and computer engineering offers a flexible way to gain specialized skills in these areas. This program blends theoretical knowledge with practical applications, covering topics like deep learning, neural networks, and robotics.
Graduates equipped with this expertise can tackle complex engineering challenges and develop cutting-edge AI solutions. Additionally, online programs offer the flexibility needed for working professionals to upskill while continuing their careers. Investing in advanced education ensures engineers remain competitive in this rapidly changing field.
Deep learning, a subset of machine learning, uses artificial neural networks to analyze and predict complex patterns in data. Recent advancements in deep learning have unlocked unprecedented capabilities in image recognition, natural language processing, and predictive analytics.
Computer engineers leverage these advancements to build intelligent systems that outperform traditional models in accuracy and speed. For example, convolutional neural networks (CNNs) have revolutionized computer vision, enabling applications like facial recognition and autonomous navigation. Similarly, recurrent neural networks (RNNs) excel in speech recognition and language translation. Deep learning's potential remains vast, promising continued innovation in AI-driven analytics.
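To ground the idea, here is a minimal pure-NumPy sketch of the 2D convolution at the heart of a CNN layer, detecting a vertical edge; it is a teaching example, not production code:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over an image: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0  # a vertical edge halfway across the image
sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]])  # classic edge-detecting kernel
print(conv2d(image, sobel_x))  # nonzero responses along the edge, zeros elsewhere
```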
As AI becomes more integrated into daily life, ethical considerations around bias and transparency grow increasingly important. Biased algorithms can lead to discriminatory outcomes in hiring, lending, and law enforcement. Addressing these challenges requires computer engineers to prioritize ethical design principles.
Efforts to mitigate bias include diversifying training data, developing explainable AI models, and implementing fairness metrics. Engineers also need to ensure transparency in AI systems, making decision-making processes understandable to users. Incorporating ethics into the AI development process builds trust and ensures technology benefits all segments of society.
AutoML, or automated machine learning, is simplifying the complex process of developing ML models. It automates tasks like data preprocessing, feature selection, and hyperparameter tuning, allowing users with little programming knowledge to build effective models.
This trend democratizes ML by enabling professionals in non-technical roles to harness its power. AutoML tools can help organizations accelerate ML adoption, making data-driven decision-making more accessible across various departments. However, computer engineers still play a crucial role in designing and refining these tools to ensure they produce accurate and reliable models.
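As a small illustration of what AutoML tools automate, the scikit-learn sketch below runs an exhaustive hyperparameter search; the model and grid are arbitrary choices for demonstration:

```python
# A minimal sketch of automated hyperparameter tuning with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
}

# Cross-validated search over every combination; AutoML systems extend this
# idea to preprocessing, feature selection, and model choice as well.
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```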
Cybersecurity is an ever-evolving challenge, with new threats emerging daily. AI and ML provide innovative solutions for detecting and mitigating these risks. ML algorithms analyze vast amounts of network data to identify patterns and anomalies that may indicate cyberattacks.
For instance, anomaly detection models can flag unusual login attempts or data transfers, while AI-driven threat intelligence platforms predict future attack vectors. Additionally, AI enhances incident response by automating threat containment and recovery processes. As cyber threats grow in sophistication, integrating AI into cybersecurity strategies becomes essential.
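A minimal sketch of this kind of anomaly detection, using scikit-learn's IsolationForest on simulated login features (the data and feature choices are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" logins: [hour_of_day, megabytes_transferred]
normal = np.column_stack([rng.normal(13, 2, 500), rng.normal(20, 5, 500)])
# Suspicious events: 2-3 a.m. logins moving far more data than usual
suspicious = np.array([[3.0, 400.0], [2.0, 350.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 flags an anomaly, +1 an inlier
```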
Federated learning is a novel approach that enables AI models to learn from decentralized data without transferring it to a central server. This method enhances data privacy and security by keeping sensitive information on local devices.
In healthcare, for example, federated learning allows hospitals to collaboratively train AI models on patient data without sharing it directly. This collaboration improves diagnostic accuracy while safeguarding patient privacy. Computer engineers working on federated learning face challenges like model optimization and communication efficiency, but the potential benefits make it a promising trend for AI development.
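The toy NumPy sketch below shows the core of federated averaging (FedAvg): each simulated client takes a gradient step on its own private data, and only the updated weights, never the raw data, are averaged by the server. It is a linear-regression toy, not a production federated system:

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # each client's data stays on its own device
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(50):
    # Clients train locally; only weight updates are sent to the server
    local_ws = [local_step(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)  # the server averages the updates

print(global_w)  # approaches [2.0, -1.0] without pooling any raw data
```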
Explainable AI (XAI) aims to make complex AI models more interpretable, allowing users to understand and trust their decisions. As AI systems become more sophisticated, they often act as black boxes, providing accurate predictions without clear reasoning.
XAI techniques like feature importance analysis and visualization help demystify these models, revealing how inputs influence predictions. For instance, in healthcare, XAI can explain why an AI model diagnosed a particular condition, enabling doctors to validate and refine treatment plans. Building transparent AI systems strengthens user trust and facilitates adoption across sensitive applications.
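One of the techniques named here, feature importance analysis, can be sketched with scikit-learn's permutation importance; the dataset and model below are arbitrary stand-ins for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time; the resulting drop in accuracy reveals how
# much the model actually relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```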
AI and machine learning continue to reshape computer engineering, driving innovation across various industries. From edge computing to AutoML, these trends unlock new possibilities while presenting unique challenges. Computer engineers equipped with advanced skills, ethical principles, and a collaborative mindset can navigate this evolving landscape successfully. By understanding and embracing these emerging trends, professionals can contribute to building a future where AI empowers businesses, improves lives, and drives sustainable growth.
Continued here:
Emerging Trends in Computer Engineering: How AI and Machine Learning Are Shaping the Future - Digital Information World
Here’s how College Vidya is building the Amazon of online education – YourStory
Posted: at 2:44 am
The rise of online education in India has been meteoric, with the market poised to reach $10.2 billion by 2025, according to the HolonIQ report. This surge is driven by the growing recognition of online degrees as flexible educational options that allow for professional skill-building alongside higher education.
As working professionals seek to upskill themselves, the overwhelming array of choices combined with limited time makes it challenging to select the right course and right education partner.
Enter College Vidya, the 'Amazon' of online education. With the ability to compare over 30 critical factors (including authentic reviews, fee structures, faculty, and placement partners) across 500+ accredited courses and 100+ online universities/edtech platforms, College Vidya offers transparency, extensive choice, and detailed insights, empowering users in their decision-making process.
In 2019, Rohit Gupta founded College Vidya, driven by a vision to transform the landscape of online education. From his modest start as a canopy boy to being recognised by Outlook as the top online education advisory platform and leading a company with revenues exceeding Rs 30 crore, Gupta's journey is nothing short of inspirational.
When asked during an interview, "Where do you see yourself in the next five years?", Gupta was prompted to reflect on the broader issue of career guidance, realising that such pivotal conversations often occur too late in one's educational journey. Recognising that over 80% of students in India struggle to choose the right career path (a challenge he himself had faced), he founded College Vidya. The platform aims to provide clarity and trust from the start, empowering students and working professionals to make informed decisions with facts at their fingertips and support from expert career counsellors.
Today, College Vidya not only reflects Gupta's transformative vision but also combats misinformation and profiteering in the online education sector. With over 500 experienced counsellors, the platform assists in choosing the best online university for career aspirations, ensuring that educational decisions are both informed and impactful, recognising the lasting impact these choices have on one's future.
The process starts when users visit the official website and engage with the "Suggest Me in 2 Mins" feature. This AI-powered tool personalises the search based on detailed inputs about the user's educational and professional preferences, including course preference, budget, and time commitments.
From here, users can compare potential online universities using a detailed list that filters over 30 different parameters, including course fees, student reviews, and placement statistics. The platform's blog offers deeper insights into courses and institutions, enhancing the decision-making process. Additionally, once enrolled, users receive admission assistance along with post-admission services such as dedicated mentorship throughout the degree completion, an active online community, and career support.
The platform caters specifically to a diverse range of demographics, from working professionals looking to upskill without pausing their careers to traditional students in undergraduate and postgraduate programs who benefit from the flexibility of online learning. It also supports those pursuing dual degrees (one of which is online) as well as non-traditional learners such as government job aspirants and individuals with physical disabilities, ensuring diverse educational needs are met.
For these users, College Vidya is not just a tool for finding the right academic programme but also a gateway to necessary resources and community support through the CV Community. This unique feature fosters networking, access to internships, job opportunities, and even industry webinars, effectively supporting users from admission to placement.
College Vidya stands out in the online education space with its exclusive focus on digital learning, distinguishing itself from other platforms that mix online and offline educational resources. True to its tagline Chuno Apna Sahi (choose what's right for you), it features over 100 UGC-DEB, NAAC, AICTE and NIRF online universities and edtech platforms, allowing students to meticulously compare and select the best options for their needs.
The platform's academic counselling is unbiased and student-centric, supported by seasoned counsellors who leverage advanced technologies and AI-powered tools to personalise the user experience. These technologies ensure that each student receives recommendations tailored to their specific educational goals.
What makes College Vidya particularly unique is its commitment to providing all services free of charge. Users incur no fees either for using the platform or for applying through it to universities. This enhances accessibility and underscores the platform's dedication to education over profit.
By 2030, it's estimated that India's higher education will emerge as the single largest provider of global talent, with one in four graduates in the world being a product of the Indian higher education system. In tandem with this, Gupta articulates a strong vision for the future of education in India, emphasizing the transformative power of online learning. "We firmly believe that online education will overtake traditional modes as it integrates practical work experience with theoretical knowledge, which students can pursue over weekends or according to their convenience," he says.
He adds that by 2025 they aim to empower 500,000 students, supporting them through pre-admission and post-admission stages with the CV Community and College Vidya LinkedIn page. The vision is to ensure every student from across the country has the tools they need to succeed in their educational and professional endeavours.
Read more:
Here's how College Vidya is building the Amazon of online education - YourStory
7 Innovative Techniques Used by Top Online Science Tutors – Intelligent Living
Posted: at 2:44 am
Access to top-notch tutoring can make a significant difference for students seeking to excel in their science studies. Despite the rise of online education, many parents and students are concerned about the effectiveness of virtual tutoring compared to traditional in-person sessions. This comprehensive article addresses these concerns by highlighting the innovative techniques employed by the best online science tutors to deliver a highly engaging and personalized learning experience.
Virtual labs and simulations have become game-changers in online science education. They enable students to engage in hands-on experimentation without the need for physical lab equipment. These interactive platforms offer a safe and cost-effective alternative to traditional lab settings, allowing students to conduct experiments, manipulate variables, and observe real-time results in a controlled virtual environment.
One of the key advantages of virtual labs is their ability to simulate scenarios that may be impractical or even dangerous in real-life settings. For instance, students can explore chemical reactions involving hazardous substances, observe astronomical phenomena, or study the behavior of subatomic particles, all within a secure digital space.
Moreover, virtual labs often include features that enhance the learning experience, such as step-by-step guidance, interactive tutorials, and real-time data visualization tools. These features support students in understanding complex processes and foster critical thinking and problem-solving skills as they navigate through simulated scenarios.
Every student learns at a different pace and has unique strengths and weaknesses. Top online science tutors recognize this diversity and use adaptive learning techniques to personalize the learning experience. By leveraging data-driven algorithms and assessment tools, these tutors can identify knowledge gaps, learning styles, and areas of struggle for each student.
Based on these insights, tutors can adapt their teaching methods, pace, and content delivery to align with the student's needs. They may introduce alternative explanations, provide additional examples, or employ different pedagogical approaches to ensure concepts are thoroughly understood. Adaptive learning techniques enable a more efficient and effective learning journey, catering to individual requirements and maximizing student engagement.
Science can sometimes be perceived as dry or intimidating, especially for students who struggle to connect with abstract concepts. To combat this challenge, top online science tutors have embraced gamification and interactive activities to make learning more enjoyable and engaging.
Through educational games, quizzes, and interactive simulations, tutors can present complex topics in a fun and interactive manner. Students can participate in virtual experiments, solve puzzles, or compete in science-themed games, all while reinforcing key concepts. Gamification taps into students' natural inclination for play and competition, fostering motivation, retention, and a deeper appreciation for science.
Collaborative learning platforms have revolutionized the way students interact and learn from one another in an online setting. Top online science tutors leverage these platforms to create virtual classrooms where students can engage in group discussions, share insights, and collaborate on projects.
One key benefit of collaborative learning is exposure to diverse perspectives and approaches. Through peer-to-peer interaction, students can learn from each other's strengths, challenge their assumptions, and develop a deeper understanding of scientific concepts.
Collaborative platforms often include features such as shared whiteboards, real-time document editing, video conferencing, and chat functionalities, enabling seamless communication and collaboration among students and tutors. Tutors act as facilitators, guiding the discussions, providing feedback, and ensuring that the collaborative process remains productive and engaging.
Through collaborative learning, students not only enhance their subject knowledge but also develop essential soft skills such as teamwork, communication, and problem-solving, which are invaluable in both academic and professional settings.
Science often involves abstract and complex concepts that can be challenging to grasp through text or verbal explanations alone. Top online science tutors understand the power of multimedia resources and visual aids in making these concepts more accessible and engaging.
They use high-quality videos, animations, interactive diagrams, and 3D models to visually represent scientific processes, structures, and phenomena. These multimedia resources not only capture students' attention but also cater to different learning styles, making complex topics more tangible and easier to comprehend.
The flipped learning approach is a powerful technique that top online science tutors employ to maximize the effectiveness of their sessions. Instead of delivering lectures during tutoring sessions, tutors provide students with multimedia resources, readings, and pre-recorded videos to explore concepts independently before the session.
During the tutoring session, students arrive prepared with questions, areas of confusion, and a basic understanding of the topic. The tutor can then address specific challenges, facilitate discussions, and guide students through higher-level applications and problem-solving scenarios. This approach empowers students to take ownership of their learning, fostering self-directed study habits and promoting active engagement during tutoring sessions.
Top online science tutors understand the importance of continuous feedback and assessment in optimizing the learning process. They use various assessment tools, including quizzes, assignments, and interactive exercises, to regularly evaluate students' progress and identify areas needing further reinforcement.
Based on these assessments, tutors can provide targeted feedback, adjust their teaching strategies, and tailor their approach to address specific gaps or misconceptions. This continuous cycle of assessment and feedback ensures that students stay on track, receive timely guidance, and progress at an optimal pace throughout their learning journey.
By embracing new technologies and teaching methods, from virtual labs and simulations to collaborative learning platforms and multimedia resources, top online science tutors are changing how students learn and engage with science. These techniques make learning more engaging and accessible while catering to diverse learning styles and individual needs.
Guided by these skilled tutors, students can embark on a personalized and enriching learning journey that fosters a deep understanding and appreciation for the wonders of science.
Originally posted here:
7 Innovative Techniques Used by Top Online Science Tutors - Intelligent Living
Google supercharges Chrome’s omnibox address bar with machine learning – TechSpot
Posted: May 5, 2024 at 2:42 am
Why it matters: Google is supercharging the address bar of its popular web browser with machine-learning capabilities. Known as the "omnibox" since it pulls double duty as both a URL entry field and search box, this unassuming text field is about to get a major upgrade.
The omnibox has evolved well beyond its humble beginnings as a place to type website addresses. It can now handle all sorts of queries and tasks by leveraging Google's vast search prowess. However, the suggestions and results it surfaces have been driven by a relatively rigid "set of hand-built and hand-tuned formulas." That's all about to change.
In a recent post on the Chromium blog, Justin Donnelly, the engineering lead for Chrome's omnibox, revealed that his team has been hard at work adapting machine learning models to drastically improve the omnibox's "relevance scoring" capabilities. In other words, omnibox will get much better at understanding the context behind your queries and providing more useful suggestions tailored to your needs.
According to Donnelly, when he surveyed colleagues on how to enhance the omnibox experience, improving the scoring system topped the wishlist. While the current rule-based approach works for a vast number of cases, it lacks flexibility and struggles to adapt to new scenarios organically. Enter machine learning.
By analyzing massive datasets of user interactions, browsing patterns, and historical data points like how frequently you visit certain sites, the new AI models can generate far more nuanced relevance scores. For instance, it learned that if you swiftly leave a webpage, chances are it wasn't what you were looking for, so suggestions for that URL get demoted.
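As a purely hypothetical illustration (not Chrome's actual code), a learned relevance scorer might combine signals like these, with weights fit from interaction data rather than hand-tuned:

```python
import math

# Hypothetical sketch of ML-style relevance scoring for omnibox suggestions.
# Signal names and weights are illustrative; in practice the weights would be
# learned from large-scale interaction data, not set by hand.
def relevance_score(visit_count: int, avg_dwell_seconds: float,
                    days_since_visit: int,
                    weights=(0.5, 0.4, -0.3)) -> float:
    w_visits, w_dwell, w_recency = weights
    return (w_visits * math.log1p(visit_count)
            + w_dwell * math.log1p(avg_dwell_seconds)  # quick bounces score low
            + w_recency * math.log1p(days_since_visit))

# A page the user lingers on outranks one abandoned after a few seconds.
print(relevance_score(visit_count=40, avg_dwell_seconds=120, days_since_visit=1))
print(relevance_score(visit_count=40, avg_dwell_seconds=3, days_since_visit=1))
```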
As you use the smarter omnibox over time across Windows, Mac, and ChromeOS, it will continue refining and personalizing its suggestions based on your evolving interests and habits. Donnelly's team also plans to explore incorporating time-of-day awareness, specialized models for different user groups like mobile or enterprise, and other contextual signals.
Of course, enabling such deep personalization requires handing over more personal browsing data to Google's machine-learning models. How comfortable you are with that trade-off is a personal decision.
Google has been gradually rolling out these omnibox improvements over recent Chrome updates, with the machine learning models really flexing their muscles starting with version M124 expected in the coming months. And while not mentioned in the blog post, it's safe to assume the update would trickle down to mobile as well eventually.
See the original post here:
Google supercharges Chrome's omnibox address bar with machine learning - TechSpot
A machine-learning method isolating changes in wrist kinematics that identify age-related changes in arm movement … – Nature.com
Posted: at 2:42 am
See the rest here:
Snap Announces new Augmented Reality and Machine Learning Tools for Brands – Branding in Asia Magazine
Posted: at 2:42 am
Snap has announced new solutions, programs, and content partnerships for advertisers to connect with Snapchat's audience.
The company revealed a series of augmented reality (AR) and machine learning (ML) tools to help brands and advertisers engage users on the network with interactive experiences.
With AR Extensions, Snap said it will enhance the way Snapchatters experience ads, enabling advertisers to integrate AR Lenses and Filters directly into all of its ad formats, including Dynamic Product Ads, Snap Ads, Collection Ads, Commercials, and Spotlight Ads.
Advertisers can showcase their products and IP and share their branded world with Snapchatters through augmented reality directly within their ads.
Snap said it has been investing in Machine Learning and automation to make AR asset creation faster and easier, and the company is now able to reduce the time it takes to create AR try-on assets at scale and help brands turn 2D product catalogs into try-on experiences.
With ML Face Effects, marketers can now create branded AR ads with Generative AI technology that allows custom-produced Lenses. This enables brands to generate a unique machine-learning model quickly, create realistic face effects, and generate selfie experiences for Snapchatters.
The company has evolved its 523 creator accelerator program by partnering with award-winning actress, writer, and producer Issa Rae and her branded entertainment company Ensemble to help brands partner and produce content with 523 participants. "Ensemble shares our mission to amplify the stories of creators from underrepresented communities. Together, we'll empower this year's 523 class of storytellers while providing brands with opportunities to collaborate directly with them," the company said.
Snap has announced a number of sponsorship opportunities. For NBCUniversal's Paris 2024 Olympic Games, Snap has partnered with the company to bring its world to the summer games. For the first time, some of Snap's popular creators, like Livvy Dunne and Harry Jowsey, will be in Paris to bring new perspectives, reporting from the events in their unique voices.
There will also be new AR experiences produced by Snap's in-house AR team, Arcadia, so the Snap community can immerse themselves in NBC's coverage, as well as daily shows from NBC featuring the most exciting highlights from Paris.
Snap said it is also continuing its longstanding partnerships with the NFL, NBA, and WNBA to provide official content across Stories and Spotlight for its community.
The company is launching the Snap Sports Network, a sports channel within Snapchat that will cover unconventional sports, like dog surfing, extreme ironing, water bottle flipping, and others.
Snap Sports Network is a new kind of content program that brands can leverage through sponsorships and product integrations. The launch partners include e.l.f. and Taco Bell, said Snap.
Read the original post:
Science has an AI problem: Research group says they can fix it – Tech Xplore
Posted: at 2:42 am
AI holds the potential to help doctors find early markers of disease and policymakers to avoid decisions that lead to war. But a growing body of evidence has revealed deep flaws in how machine learning is used in science, a problem that has swept through dozens of fields and implicated thousands of erroneous papers.
Now an interdisciplinary team of 19 researchers, led by Princeton University computer scientists Arvind Narayanan and Sayash Kapoor, has published guidelines for the responsible use of machine learning in science.
"When we graduate from traditional statistical methods to machine learning methods, there are a vastly greater number of ways to shoot oneself in the foot," said Narayanan, director of Princeton's Center for Information Technology Policy and a professor of computer science.
"If we don't have an intervention to improve our scientific standards and reporting standards when it comes to machine learning-based science, we risk not just one discipline but many different scientific disciplines rediscovering these crises one after another."
The authors say their work is an effort to stamp out this smoldering crisis of credibility that threatens to engulf nearly every corner of the research enterprise. A paper detailing their guidelines appears May 1 in the journal Science Advances.
Because machine learning has been adopted across virtually every scientific discipline, with no universal standards safeguarding the integrity of those methods, Narayanan said the current crisis, which he calls the reproducibility crisis, could become far more serious than the replication crisis that emerged in social psychology more than a decade ago.
The good news is that a simple set of best practices can help resolve this newer crisis before it gets out of hand, according to the authors, who come from computer science, mathematics, social science and health research.
"This is a systematic problem with systematic solutions," said Kapoor, a graduate student who works with Narayanan and who organized the effort to produce the new consensus-based checklist.
The checklist focuses on ensuring the integrity of research that uses machine learning. Science depends on the ability to independently reproduce results and validate claims. Otherwise, new work cannot be reliably built atop old work, and the entire enterprise collapses.
While other researchers have developed checklists that apply to discipline-specific problems, notably in medicine, the new guidelines start with the underlying methods and apply them to any quantitative discipline.
One of the main takeaways is transparency. The checklist calls on researchers to provide detailed descriptions of each machine learning model, including the code, the data used to train and test the model, the hardware specifications used to produce the results, the experimental design, the project's goals and any limitations of the study's findings.
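A minimal sketch of what recording such run details might look like in practice; the fields below are illustrative, not the checklist's official schema:

```python
# Illustrative reproducibility metadata for a machine-learning experiment.
import json
import platform
import random
import sys

def record_run_metadata(seed: int, model_name: str, dataset: str) -> dict:
    random.seed(seed)  # fix randomness so results can be reproduced
    return {
        "model": model_name,
        "dataset": dataset,
        "random_seed": seed,
        "python_version": sys.version.split()[0],
        "platform": platform.platform(),  # stand-in for full hardware specs
        "limitations": "results validated on one cohort only",  # example note
    }

metadata = record_run_metadata(42, "gradient_boosting", "train_v2.csv")
print(json.dumps(metadata, indent=2))  # publish alongside code and data
```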
The standards are flexible enough to accommodate a wide range of nuance, including private datasets and complex hardware configurations, according to the authors.
While the increased rigor of these new standards might slow the publication of any given study, the authors believe wide adoption of these standards would increase the overall rate of discovery and innovation, potentially by a lot.
"What we ultimately care about is the pace of scientific progress," said sociologist Emily Cantrell, one of the lead authors, who is pursuing her Ph.D. at Princeton.
"By making sure the papers that get published are of high quality and that they're a solid base for future papers to build on, that potentially then speeds up the pace of scientific progress. Focusing on scientific progress itself and not just getting papers out the door is really where our emphasis should be."
Kapoor concurred. The errors hurt. "At the collective level, it's just a major time sink," he said. That time costs money. And that money, once wasted, could have catastrophic downstream effects, limiting the kinds of science that attract funding and investment, tanking ventures that are inadvertently built on faulty science, and discouraging countless numbers of young researchers.
In working toward a consensus about what should be included in the guidelines, the authors said they aimed to strike a balance: simple enough to be widely adopted, comprehensive enough to catch as many common mistakes as possible.
They say researchers could adopt the standards to improve their own work; peer reviewers could use the checklist to assess papers; and journals could adopt the standards as a requirement for publication.
"The scientific literature, especially in applied machine learning research, is full of avoidable errors," Narayanan said. "And we want to help people. We want to keep honest people honest."
More information: Sayash Kapoor et al, REFORMS: Consensus-based Recommendations for Machine-learning-based Science, Science Advances (2024). DOI: 10.1126/sciadv.adk3452. http://www.science.org/doi/10.1126/sciadv.adk3452
Journal information: Science Advances
Here is the original post:
Science has an AI problem: Research group says they can fix it - Tech Xplore
Environmental Implications of the AI Boom | by Stephanie Kirmer | May, 2024 – Towards Data Science
Posted: at 2:42 am
There's a core concept in machine learning that I often tell laypeople about to help clarify the philosophy behind what I do. That concept is the idea that the world changes around every machine learning model, often because of the model, so the world the model is trying to emulate and predict is always in the past, never the present or the future. The model is, in some ways, predicting the future (that's how we often think of it), but in many other ways, the model is actually attempting to bring us back to the past.
I like to talk about this because the philosophy around machine learning helps give us real perspective as machine learning practitioners as well as the users and subjects of machine learning. Regular readers will know I often say that "machine learning is us," meaning we produce the data, do the training, and consume and apply the output of models. Models are trying to follow our instructions, using raw materials we have provided to them, and we have immense, nearly complete control over how that happens and what the consequences will be.
Another aspect of this concept that I find useful is the reminder that models are not isolated in the digital world, but in fact are heavily intertwined with the analog, physical world. After all, if your model isn't affecting the world around us, that sparks the question of why your model exists in the first place. If we really get down to it, the digital world is only separate from the physical world in a limited, artificial sense, that of how we as users/developers interact with it.
This last point is what I want to talk about today: how does the physical world shape and inform machine learning, and how does ML/AI in turn affect the physical world? In my last article, I promised that I would talk about how the limitations of resources in the physical world intersect with machine learning and AI, and that's where we're going.
This is probably obvious if you think about it for a moment. There's a joke that goes around about how we can defeat the sentient robot overlords by just turning them off, or unplugging the computers. But jokes aside, this has a real kernel of truth. Those of us who work in machine learning and AI, and computing generally, have complete dependence for our industry's existence on natural resources, such as mined metals, electricity, and others. This has some commonalities with a piece I wrote last year about how human labor is required for machine learning to exist, but today we're going to go in a different direction and talk about two key areas that we ought to appreciate more as vital to our work: mining/manufacturing and energy, mainly in the form of electricity.
If you go out looking for it, there is an abundance of research and journalism about both of these areas, not only in direct relation to AI but also relating to earlier technological booms such as cryptocurrency, which shares a great deal with AI in terms of its resource usage. I'm going to give a general discussion of each area, with citations for further reading so that you can explore the details and get to the source of the scholarship. It is hard, however, to find research that takes into account the AI boom of the last 18 months, so I expect that some of this research underestimates the impact of the new technologies in the generative AI space.
What goes into making a GPU chip? We know these chips are instrumental in the development of modern machine learning models, and Nvidia, the largest producer of these chips today, has ridden the crypto boom and the AI craze to a place among the most valuable companies in existence. Their stock price went from $130 a share at the start of 2021 to $877.35 a share in April 2024 as I write this sentence, giving them a reported market capitalization of over $2 trillion. In Q3 of 2023, they sold over 500,000 chips, for over $10 billion. Estimates put their total 2023 sales of H100s at 1.5 million, and 2024 is easily expected to beat that figure.
GPU chips involve a number of different specialty raw materials that are somewhat rare and hard to acquire, including tungsten, palladium, cobalt, and tantalum. Other elements might be easier to acquire but have significant health and safety risks, such as mercury and lead. Mining these elements and compounds has significant environmental impacts, including emissions and environmental damage to the areas where mining takes place. Even the best mining operations change the ecosystem in severe ways. This is in addition to the risk of what are called Conflict Minerals, or minerals that are mined in situations of human exploitation, child labor, or slavery. (Credit where it is due: Nvidia has been very vocal about avoiding use of such minerals, calling out the Democratic Republic of Congo in particular.)
In addition, after the raw materials are mined, all of these materials have to be processed extremely carefully to produce the tiny, highly powerful chips that run complex computations. Workers take on significant health risks when working with heavy metals like lead and mercury, as we know from industrial history over the last 150+ years. Nvidia's chips are made largely in factories in Taiwan run by a company called Taiwan Semiconductor Manufacturing Company, or TSMC. Because Nvidia doesn't actually own or run the factories, it is able to bypass criticism about manufacturing conditions or emissions, and data is difficult to come by. The power required for this manufacturing is also not on Nvidia's books. As an aside: TSMC has reached the maximum of its capacity and is working on increasing it. In parallel, Nvidia is planning to begin working with Intel on manufacturing capacity in the coming year.
After a chip is produced, its useful lifespan can be significant (3-5 years if maintained well); however, Nvidia is constantly producing new, more powerful, more efficient chips (2 million a year is a lot!), so a chip's lifespan may be limited by obsolescence as well as wear and tear. When a chip is no longer useful, it goes into the pipeline of what is called e-waste. Theoretically, many of the rare metals in a chip ought to have some recycling value, but as you might expect, chip recycling is a very specialized and challenging technological task, and only about 20% of all e-waste gets recycled, including much less complex things like phones and other hardware. The recycling process also requires workers to disassemble equipment, again coming into contact with the heavy metals and other elements involved in manufacturing to begin with.
If a chip is not recycled, on the other hand, it is likely dumped in a landfill or incinerated, leaching those heavy metals into the environment via water, air, or both. This often happens in developing countries, and it directly affects areas where people live.
Most research on the carbon footprint of machine learning, and its general environmental impact, has focused on power consumption, however. So let's take a look in that direction.
Once we have the hardware necessary to do the work, the elephant in the room with AI is definitely electricity consumption. Training large language models consumes extraordinary amounts of electricity, but serving and deploying LLMs and other advanced machine learning models is also an electricity sinkhole.
In the case of training, one research paper suggests that training GPT-3, with 175 billion parameters, consumes around 1,300 megawatt-hours (MWh), or 1,300,000 kWh, of electricity. Contrast this with GPT-4, which reportedly uses 1.76 trillion parameters and whose estimated training power consumption was between 51,772,500 and 62,318,750 kWh. For context, an average American home uses just over 10,000 kWh per year. On the conservative end, then, training GPT-4 once could power more than 5,000 American homes for a year. (This does not consider all the power consumed by preliminary analyses or tests that were almost certainly required to prepare the data and get ready to train.)
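If you want to sanity-check these figures yourself, here's the back-of-the-envelope arithmetic in Python. The constants are the numbers quoted above; the "home-years" framing is just mine, for intuition:

```python
# Back-of-the-envelope check of the training-energy figures quoted above.
GPT3_TRAINING_KWH = 1_300_000        # ~1,300 MWh reported for GPT-3
GPT4_TRAINING_KWH_LOW = 51_772_500   # conservative estimate for GPT-4
US_HOME_KWH_PER_YEAR = 10_000        # rough average for an American home

home_years = GPT4_TRAINING_KWH_LOW / US_HOME_KWH_PER_YEAR
growth = GPT4_TRAINING_KWH_LOW / GPT3_TRAINING_KWH

print(f"GPT-4 training (conservative) ~ {home_years:,.0f} home-years of power")  # ~5,177
print(f"GPT-3 -> GPT-4 training energy grew ~{growth:.0f}x")                     # ~40x
```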
Given that training power usage went up approximately 40x between GPT-3 and GPT-4, we have to be concerned about the future electrical consumption of the next versions of these models, as well as the consumption of training models that generate video, image, or audio content.
Past the training process, which only needs to happen once in the life of a model, there's the rapidly growing electricity consumption of inference tasks: the cost incurred every time you ask ChatGPT a question or try to generate a funny image with an AI tool. This power is absorbed by the data centers where the models run so that they can serve results around the globe. The International Energy Agency has predicted that data centers alone could consume 1,000 terawatt-hours (TWh) in 2026, roughly the annual power usage of Japan.
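To put that projection on the same household scale as the training numbers, here is the conversion (my arithmetic; 1 TWh is a billion kWh):

```python
# Converting the IEA data-center projection into household terms.
TWH_TO_KWH = 1_000_000_000            # 1 terawatt-hour = 1 billion kilowatt-hours
iea_2026_kwh = 1_000 * TWH_TO_KWH     # 1,000 TWh projected for data centers in 2026
US_HOME_KWH_PER_YEAR = 10_000

print(f"~{iea_2026_kwh / US_HOME_KWH_PER_YEAR:,.0f} home-years")  # ~100,000,000
# That is a full year of electricity for roughly 100 million American homes.
```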
Major players in the AI industry are clearly aware that this kind of growth in electricity consumption is unsustainable. Estimates are that data centers consume between 0.5% and 2% of all global electricity usage, and they could potentially account for 25% of US electricity usage by 2030.
Electrical infrastructure in the United States is not in good condition: we are trying to add more renewable power to our grid, of course, but we're deservedly not known as a country that manages its public infrastructure well. Texas residents in particular know the fragility of our electrical systems, but across the US, climate change in the form of increasingly extreme weather is causing power outages at a growing rate.
Whether investments in electricity infrastructure can meet the skyrocketing demand wrought by AI tools remains to be seen, and since government action is necessary to get there, it's reasonable to be pessimistic.
In the meantime, even if we do manage to produce electricity at the necessary rates, until renewable and emission-free sources of electricity are scalable, we're adding meaningfully to the globe's carbon emissions by using these AI tools. At a rough estimate of 0.86 pounds of carbon dioxide emitted per kWh of power, training GPT-4 put over 20,000 metric tons of CO2 into the atmosphere. (For contrast, the average American emits 13 metric tons per year.)
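Again, you can verify that figure yourself; both ends of the training-energy estimate clear the 20,000-ton mark:

```python
# Verifying the "over 20,000 metric tons" figure from the estimates above.
LB_PER_METRIC_TON = 2204.62
LB_CO2_PER_KWH = 0.86  # rough grid-average emissions factor used above

for label, kwh in [("low", 51_772_500), ("high", 62_318_750)]:
    tons = kwh * LB_CO2_PER_KWH / LB_PER_METRIC_TON
    print(f"GPT-4 training ({label} estimate): ~{tons:,.0f} metric tons of CO2")
# -> ~20,196 and ~24,311 metric tons; at 13 tons per American per year,
#    that's on the order of 1,500-1,900 person-years of emissions.
```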
As you might expect, I'm not out here arguing that we should quit doing machine learning because the work consumes natural resources. I think that the workers who make our lives possible deserve significant workplace safety precautions and compensation commensurate with the risk, and I think renewable sources of electricity should be a huge priority as we face down preventable, human-caused climate change.
But I talk about all this because knowing how much our work depends on the physical world, natural resources, and the earth should make us humbler and make us appreciate what we have. When you conduct training or inference, or use ChatGPT or DALL-E, you are not the endpoint of the process. Your actions have downstream consequences, and it's important to recognize that and make informed decisions accordingly. You might be renting seconds or hours of someone else's GPU, but that still uses power and causes wear on a GPU that will eventually need to be disposed of. Part of being an ethical world citizen is thinking about your choices and considering your effect on other people.
In addition, if you are interested in finding out more about the carbon footprint of your own modeling efforts, there's a tool for that: https://www.green-algorithms.org/
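Green Algorithms is a web-based calculator; if you would rather measure from inside your own code, open-source packages exist for the same purpose. Here is a minimal sketch using codecarbon, one such package (my example, not a tool affiliated with Green Algorithms):

```python
# Minimal sketch using the open-source codecarbon package (pip install codecarbon).
# It estimates the energy drawn by CPU, GPU, and RAM during the tracked block and
# converts it to CO2-equivalent using regional grid-intensity data.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... your training or inference workload goes here ...
emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```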
Read this article:
Environmental Implications of the AI Boom | by Stephanie Kirmer | May, 2024 - Towards Data Science