GLIMS Journal of Management Review
and Transformation

Jogen Sharma1 and Dayawanti Tarmali2

First Published 9 Dec 2025. https://doi.org/10.1177/jmrt.251398199
Article Information Volume 4, Issue 2 September 2025
Corresponding Author:

Jogen Sharma, India International Centre, New Delhi 110003, India
Email: jogen.sharma2@gmail.com; jogen.sharma@zohomail.in

1 India International Centre, New Delhi, India

2 Management Development Institute Murshidabad, West Bengal, India

Abstract

In recent years, artificial intelligence (AI) and machine learning (ML) have developed at a rapid pace, transforming numerous areas of modern life. These technologies are increasingly widespread in daily life: in healthcare (diagnosis and monitoring), finance (fraud detection and personalised services), smart homes (voice assistants and automation), education (AI tutors and personalised learning), transportation (route optimisation and emerging self-driving vehicles) and entertainment (recommendation systems and content creation). Such applications promise individuals and organisations efficiency, customisation and automation. Nevertheless, the widespread use of AI raises serious concerns: data privacy, the fairness and transparency of algorithms, the elimination of biased or discriminatory results and workforce impacts that demand reskilling. It is therefore important to balance innovation with responsibility. This article reviews the recent literature on AI/ML in everyday life, compares its adoption across sectors and analyses ethical issues. A bibliometric approach is described, and existing use cases are surveyed domain by domain. We weigh the advantages and obstacles and comment on policy implications. Finally, future paths, including generative AI, edge computing and sustainable AI, are discussed, and the need for ethical governance is noted. Together, this review indicates the revolutionary potential of AI in our daily lives and the significance of applying it responsibly.

Keywords

AI, ML, implementation into everyday life, robots, ethical AI, intelligent technology

Introduction

Artificial intelligence (AI) is the use of computer systems to carry out functions that would otherwise require human intelligence. Machine learning (ML) is a subset of AI that allows computer systems to improve with experience. These technologies have matured over the years through progress in algorithms, data availability and processing power, enabling rapid deployment in both consumer and enterprise environments. Intelligent systems have already penetrated ordinary life through smartphone applications, home assistants, online services and industrial systems. For example, smart thermostats, voice assistants and smart appliances adapt to how people live. In healthcare, AI facilitates accurate diagnosis and individualised treatment planning. AI-based platforms support individually designed learning. Self-driving vehicles and logistics optimisation are transforming transportation, while chatbots, gaming AI and streaming recommendations are transforming entertainment. Such developments have driven exponential expansion of the global AI market. In 2025, the AI industry was estimated to be worth over US$515 billion and is expected to grow at a compound annual rate of almost 20% to some US$2.74 trillion in 2032. This massive expansion underscores how deeply AI is becoming embedded across industries (Figure 1). As the use of AI/ML widens, it delivers productivity improvements, personalised services and the benefits of automation.

 

Figure 1. Global AI Market Size (2025–2032).

 

Along with these opportunities, questions regarding ethics and practicality have arisen. AI systems rely on massive data sets, raising concerns about privacy and the security of personal information. When algorithms are trained on biased or incomplete data, they can produce biased or discriminatory predictions that perpetuate social inequities. Additionally, automation poses the risk of replacing some jobs even as it creates new positions and requires the labour force to upskill. Responsible AI implementation must therefore address data governance, transparency and societal impact. Here, our review surveys existing real-world applications of AI/ML and underscores the associated ethical issues. We use a methodical literature-review approach to chart recent studies (2018–2025) on AI in daily life, generalising knowledge across industries and outlining future prospects.

Literature Review

The implementation of AI/ML in diverse real-life situations has been widely explored in recent literature, revealing strong interest in its opportunities alongside fears of pitfalls. Many reviews have demonstrated the application of AI in medical imaging, diagnostics, telemedicine and wearable monitoring. For example, deep learning algorithms can examine radiology images with high precision, and AI-based wearable sensors can notify individuals of potential illness. Researchers have reported that AI tutoring and adaptive learning platforms are changing pedagogy by tailoring content and pace to each student. Finance researchers record that fraud-detection systems and robo-advisors are built on ML models to enhance security and financial planning. Research on smart homes and IoT finds that appliances, voice agents and smart thermostats have become widespread, automating home control and energy management. In transport, AI enables route optimisation, and autonomous vehicles are a major research focus, though their complete deployment is not yet possible owing to technical and regulatory challenges. Recommendation engines and content-generation tools in entertainment and social media shape much of the user experience and enable new creative uses. Across these areas, empirical research tends to highlight gains in effectiveness and convenience while discussing early adoption.

Comparative results show disproportionate AI adoption across industries and firm sizes. A working paper by the NBER concludes that, at the time (2017), less than 6% of US firms had implemented AI-related technologies. Adoption is also concentrated among large businesses: approximately 50–60% of very large companies employ AI, compared with only around 6% of small companies. Some industries are pioneers—in manufacturing, information services and healthcare, approximately 12% of companies had employed AI—while others, such as construction and retail, stood at just around 4%. Figure 2 shows the inter-sectoral variation in the intensity of AI integration. Very large companies (particularly in technology-driven industries) report using AI for tasks such as predictive maintenance and data analytics, while small or traditional industries lag behind. Such research reveals a digital divide: AI penetrates every industry, but unequally, suggesting barriers of cost, data access and expertise.

Another key thematic area in the literature is ethical consideration. Researchers have raised the issue of algorithmic bias—the tendency of AI systems to produce unfair results when trained on biased data. One study of AI hiring tools, for example, notes that if the historical data used to train an automated hiring algorithm echo previous biases, the algorithm may reproduce or amplify discrimination by gender or race. In broader terms, algorithmic bias can be viewed as systematic and repeatable errors in computer systems that cause unfair discrimination based on protected characteristics. A number of case studies record bias in areas such as criminal justice risk assessment, facial recognition and loan approvals when precautions are not taken. Equally, privacy and security issues dominate the literature on AI in healthcare: as one CDC report cautions, given AI's capacity to handle large volumes of personal information, patient privacy and confidentiality are the foremost concerns in healthcare AI. When sensitive data are leaked or misused, trust in AI-operated services is destroyed. Furthermore, researchers note the social consequences of automation: AI can replace routine professions while increasing the need for technical positions, requiring mass upskilling of the population and the development of supportive policies. Overall, the current literature shows both the potential of AI to improve performance in real-life conditions and the importance of considering fairness, accountability and human effects.

However, there are still gaps in practice-based research. Several reviews note the need for additional longitudinal research on the effectiveness of AI-based interventions in the real world and for more effective measures of ethical compliance (e.g., open audits of algorithms). We observe that, while numerous articles discuss the advantages of AI in particular sectors, there are fewer cross-sector comparisons or syntheses of the experience of everyday users of AI. This drives us to combine cross-domain knowledge and actively compare the advantages of applications with the overall challenges and issues of governance.

 

Figure 2. The Intensity of AI Use and Testing Rates by Sector (United States).

 

Methodology

Mixed bibliometric and thematic review methods were used in this study. We selected peer-reviewed articles, conference papers and technical reports published between 2018 and early 2025 in large academic databases (e.g., Scopus, Web of Science, IEEE Xplore and Google Scholar) through a systematic search. Search strings combined terms such as AI and ML with keywords of target fields (healthcare, education, finance, smart home, transportation and entertainment) and ethical terms (privacy, bias and governance). The inclusion criteria covered studies providing empirical data or substantive analysis of AI/ML application in everyday life, as well as literature reviews on the theme. Purely technical articles without a practical application context were excluded.

These searches retrieved more than 500 documents. After deduplication and relevancy screening, the 100 most important sources were chosen for in-depth analysis. We also incorporated official guidelines (e.g., the UNESCO AI Ethics Recommendation) and reports by authoritative institutions (e.g., the World Economic Forum's Future of Jobs 2023). Co-authorship networks and keyword trend maps were plotted using bibliometric tools (VOSviewer and Biblioshiny), and clusters of research topics were identified (not shown here). Qualitative analysis grouped the results by field of application and by ethical/policy themes. We extracted statistics (e.g., adoption rates and market values) where available to show trends. Trend charts were plotted using Matplotlib, supplemented by charts reproduced from reputable sources. This approach enabled us to generalise wide-ranging knowledge across various fields and to ground our discussion in quantitative data from recent studies and reports.
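To make the search strategy concrete, the combination of AI terms with domain and ethics keywords can be sketched in Python. This is an illustrative sketch only: the exact term lists, Boolean syntax and field tags used against each database are assumptions, not the study's actual queries.

```python
from itertools import product

# Illustrative term sets mirroring the search strategy described above.
AI_TERMS = ["artificial intelligence", "machine learning"]
DOMAINS = ["healthcare", "education", "finance", "smart home",
           "transportation", "entertainment"]
ETHICS = ["privacy", "bias", "governance"]

def build_queries(ai_terms, domains, ethics_terms):
    """Return Boolean search strings for every AI-term/keyword pairing."""
    queries = []
    for ai, dom in product(ai_terms, domains):
        queries.append(f'"{ai}" AND "{dom}"')
    for ai, eth in product(ai_terms, ethics_terms):
        queries.append(f'"{ai}" AND "{eth}"')
    return queries

queries = build_queries(AI_TERMS, DOMAINS, ETHICS)
print(len(queries))   # 2*6 + 2*3 = 18 query strings
print(queries[0])     # "artificial intelligence" AND "healthcare"
```

In practice, each generated string would be adapted to the query syntax of the individual database before execution.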

Uses of AI and ML in Our Life

Healthcare

AI and ML are revolutionising the healthcare sector through applications in diagnostics, treatment planning and patient monitoring. Deep learning algorithms process medical images (X-rays, MRIs) to identify anomalies (e.g., tumours) with accuracy similar to that of trained human experts. AI chatbots have been embraced in telemedicine platforms for triage and care planning (Kumar et al., 2025). Smartwatches and similar fitness trackers use ML to track vital signs (heart rate and glucose levels) and notify users or physicians when these values are abnormal. Research has demonstrated that AI can forecast illness risk by mining lifestyle and genetic information; for example, personalised AI systems can infer a person's risk of diabetes or heart disease from diet and sleep logs. In addition, AI-based genomic studies facilitate precision medicine, in which treatment is tailored to patient subtypes. Such AI tools can enhance outcomes through early detection of conditions and personalised care. Nevertheless, researchers warn that AI suggestions should supplement (Dangeti et al., 2023), not substitute, expert guidance and that effective data privacy measures are required to secure sensitive medical data.
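The wearable-alert idea above can be sketched as a toy rule: flag readings that deviate sharply from a wearer's personal baseline. This is an illustrative example, not a clinical algorithm; the baseline readings and the z-score threshold are invented.

```python
from statistics import mean, stdev

def abnormal_readings(baseline, new_readings, z_threshold=3.0):
    """Return readings more than z_threshold std devs from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [r for r in new_readings if abs(r - mu) > z_threshold * sigma]

# Hypothetical resting heart-rate history (bpm) for one wearer.
baseline = [62, 64, 61, 63, 65, 62, 64, 63, 61, 65]
alerts = abnormal_readings(baseline, [63, 66, 110, 64])
print(alerts)   # [110] -> only the sharp spike triggers an alert
```

Real wearables learn far richer, individualised models, but the principle of comparing new data against a personal baseline is the same.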

Education

AI/ML can promote personalised learning and tutoring in education. Adaptive learning platforms use algorithms to analyse a student's mastery and adjust the difficulty of content in real time. As an illustration, intelligent tutoring systems can correlate quiz results and give each learner exercises tailored to those results. AI also automates the marking of some assignments, allowing teachers to concentrate on teaching (Kamalov et al., 2023). Recent research indicates that AI-personalised learning can considerably boost student participation and retention (e.g., increasing course completion rates by up to 70 points). Moreover, natural language processing enables AI tutors to converse with students, answering questions or explaining concepts in real time. These applications are promising; however, teachers insist that human intervention remains necessary (Almusaed et al., 2023; Mouta et al., 2024; Webb et al., 2020). Outstanding issues include promoting educational equity (AI-driven programmes must benefit all groups of students equally) and training teachers to work with AI tools. According to the literature, institutions must incorporate AI literacy into teacher training because many teachers have not been trained to use these tools.
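The adaptive-difficulty loop described above can be sketched as a simple rule: promote a learner after strong recent quiz performance and demote after weak performance. The levels, window and thresholds here are invented for illustration and are not taken from any cited platform.

```python
def next_level(level, recent_scores, up=0.8, down=0.5, max_level=5):
    """Return the next difficulty level given recent 0/1 quiz scores."""
    accuracy = sum(recent_scores) / len(recent_scores)
    if accuracy >= up and level < max_level:
        return level + 1          # mastery shown: raise difficulty
    if accuracy <= down and level > 1:
        return level - 1          # struggling: lower difficulty
    return level                  # otherwise stay at the current level

print(next_level(2, [1, 1, 1, 0, 1]))  # 0.80 accuracy -> promoted to 3
print(next_level(4, [0, 1, 0, 0]))     # 0.25 accuracy -> dropped to 3
print(next_level(3, [1, 1, 0, 1]))     # 0.75 accuracy -> stays at 3
```

Production systems estimate mastery with statistical models (e.g., knowledge tracing) rather than a raw accuracy threshold, but the feedback loop is the same.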

Finance

Financial services have utilised AI for activities such as fraud detection, risk assessment and customer service. Unlike conventional rule-based systems, ML algorithms scan transaction data on the fly to identify suspicious behaviour (e.g., anomalous payments) (Anang et al., 2024; Buchanan, 2019; Weber et al., 2023). Robo-advisors apply AI to provide individualised investment advice based on an individual's goals and risk tolerance. Notably, according to a survey of financial institutions, 88% of organisations that use AI report a rise in revenue as a result of AI tools. In fact, 34% of such firms experienced an increase of more than 20%, and over 50% saw revenues rise by at least 10% (Figure 3). These improvements stem from efficiency (e.g., quicker loan processing), better decision-making and additional services. Chatbots also manage customer queries 24/7, helping to reduce bank waiting times.

 

Figure 3. AI Effect on Financial Services Revenue.

 

This pattern demonstrates how financial automation and analytics based on AI (e.g., fraud detection, algorithmic trading and personalised advisory) are turning into real business value.
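As a toy illustration of the transaction-screening idea discussed above: one of the simplest anomaly signals is a payment far above a customer's typical amount. Real fraud systems use learned models over many features; the amounts and the threshold factor here are invented.

```python
from statistics import median

def flag_suspicious(history, new_txns, factor=10):
    """Flag transactions far above the customer's typical (median) amount."""
    typical = median(history)
    return [amt for amt in new_txns if amt > factor * typical]

# Hypothetical past card payments for one customer.
history = [12.5, 30.0, 8.0, 22.0, 15.0, 18.0]
flagged = flag_suspicious(history, [25.0, 499.0, 14.0])
print(flagged)   # [499.0] -> only the outsized payment is flagged
```

A flagged transaction would typically trigger a secondary check (e.g., a customer confirmation step) rather than an outright block.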

However, finance also faces ethical problems. AI-driven credit scoring must not be biased by race or income, and regulators require that loans be granted impartially. Privacy is important when algorithms are fed personal financial records (Cao et al., 2024). In addition, job displacement is a concern; for example, algorithmic trading can eliminate certain trading positions while generating demand for data scientists and compliance specialists.

Smart Technology and Homes

AI-powered smart home devices have become widespread. Voice assistants (e.g., Alexa, Google Assistant and Siri) control lights, thermostats and appliances by voice. Intelligent sensors learn routines (e.g., setting the thermostat automatically) and conserve energy. AI is used in entertainment devices (smart TVs and streaming gadgets) for content suggestions. A recent report indicates that 45% of households in the United States own at least one smart home device, and 18% own six or more. Adoption keeps increasing annually: an estimated 57.6 million US homes utilised smart devices in 2022 (Geng & Bi, 2023; Jois et al., 2023), a number expected to rise to approximately 85 million by 2026 (Figure 4).

This is reflected in global consumer IoT markets. Smart home/IoT devices were expected to reach approximately 26 billion by 2022 and to rise to approximately 30.9 billion by 2026 (Figure 5). The spread of interconnected devices (e.g., wearables and voice assistants) underscores AI's infiltration into everyday life. For example, voice assistants are used by hundreds of millions of people individually. In 2022, approximately 142 million Americans (45% of the population) utilised voice AI, and the figure is increasing. Major platforms report growing user bases; for example, Google Assistant has surpassed 88 million users in the United States alone.

Voice assistants deserve mention as exemplary smart home AI. These systems can not only play music or provide reminders but also connect with other services (banking and e-commerce) by voice. Their development exemplifies the normalisation of AI: in 2022, almost three-quarters of Americans actively used voice assistants. This human–machine interface makes life more convenient (e.g., hands-free operation) but raises concerns over voice data privacy and recognition accuracy.

 

Figure 4. US Households Using Smart Home Devices (2022–2026).

 

Transportation

AI and ML are transforming transportation through optimisation and autonomy. ML algorithms in ride-sharing and navigation applications estimate demand, find optimal routes and reduce waiting times. Traffic management systems use AI-powered projections to modify signals and relieve congestion. In logistics, AI plans and routes delivery fleets to conserve fuel. One of the most visible fields is autonomous vehicles: organisations such as Waymo and Tesla implement AI vision and decision-making to allow self-driving on highways. These systems combine sensor fusion (lidar, cameras, GPS) with deep learning-based real-time environment perception. However, fully autonomous vehicles face technical and social challenges. Existing systems perform well in clear weather, yet managing unexpected incidents (e.g., extreme weather and nonstandard road users) remains problematic. Regulatory uncertainty and public safety concerns further limit widespread deployment. Therefore, although AI already contributes to route planning and driver assistance, fully autonomous transport remains an emerging frontier.


 

Figure 5. Global Smart Device Market Growth (2023–2028).

 


Entertainment and Social Media

AI is integral to digital entertainment. In video games, AI manages nonplayer characters and adjusts game difficulty. Movie and music platforms (Netflix and Spotify) depend on recommendation algorithms to personalise content. Social media feeds apply ML to surface the posts a user is most likely to engage with. Notably, generative AI (text, image and video synthesis) produces novel creative tools and even automatic content generation (e.g., AI-generated scripts or artwork). Entertainment AI is a fast-growing market, estimated to expand from approximately US$15 billion in 2024 to US$196 billion by 2033 (Ooi et al., 2023). One third of this segment's value is achieved through personalised recommendations alone, indicating consumer interest in personalised experiences.
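A toy sketch of the recommendation idea above: items that are often consumed together are suggested to users who have seen one of them. Production platforms use learned embeddings and much richer signals; this co-occurrence count, and the viewing histories it runs on, are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def recommend(histories, seed_item, top_n=2):
    """Suggest items most often co-watched with seed_item."""
    cooc = Counter()
    for watched in histories:
        for a, b in combinations(set(watched), 2):
            cooc[(a, b)] += 1   # count each unordered pair in both directions
            cooc[(b, a)] += 1
    scores = Counter({b: n for (a, b), n in cooc.items() if a == seed_item})
    return [item for item, _ in scores.most_common(top_n)]

# Hypothetical per-user viewing histories.
histories = [
    ["drama1", "scifi1", "scifi2"],
    ["scifi1", "scifi2", "doc1"],
    ["scifi1", "scifi2"],
    ["drama1", "doc1"],
]
print(recommend(histories, "scifi1"))  # 'scifi2' ranks first (co-watched 3x)
```

The same counting idea underlies classic item-to-item collaborative filtering, which modern systems extend with learned similarity models.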

Challenges and Ethical Issues

As AI/ML permeates daily life, several challenges have become prominent.

Data Privacy and Security

AI systems require large amounts of personal data, which must be secured to avoid breaches. The privacy stakes of gathering health or financial data for ML, for example, are high. Researchers emphasise privacy and confidentiality protection because AI consumes vast quantities of data. Laws such as the GDPR and HIPAA are starting to address these problems, but continuous monitoring is needed, particularly as connected devices proliferate (Buchanan, 2019). Companies should enforce effective encryption, anonymisation and user-consent procedures.

Algorithmic Bias and Fairness

AI may reinforce social bias. When models are trained on historical data that encode human biases, they can disproportionately harm members of protected classes (Liu, 2024). For example, an AI-based hiring tool trained on previous hires may rank minority candidates lower if the historical pool was unbalanced. The results of such algorithmic bias include unfair lending, hiring, policing and so on. To mitigate these problems, diverse training data and bias-detection methods are required. Researchers recommend auditing algorithms for disparate impact and involving stakeholders at design time to maintain fairness.
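The disparate-impact audit recommended above can be sketched as a comparison of selection rates across groups, using the common "four-fifths" rule of thumb. The outcome data below are invented for illustration.

```python
def disparate_impact(selected, group):
    """Ratio of the lowest group selection rate to the highest."""
    rates = {}
    for g in set(group):
        members = [s for s, grp in zip(selected, group) if grp == g]
        rates[g] = sum(members) / len(members)
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi

# Hypothetical hiring outcomes: 1 = hired, 0 = rejected.
selected = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0]
group    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = disparate_impact(selected, group)
print(round(ratio, 2), "flag" if ratio < 0.8 else "ok")  # 0.33 flag
```

A ratio below 0.8 is a conventional red flag prompting deeper investigation, not proof of discrimination by itself.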

Job Displacement Versus Skill Transformation

Automation is associated with job destruction even as it leads to new technical jobs. A recent World Economic Forum report predicts a net decline of approximately 14 million jobs (2% of existing jobs) by 2027 owing to disruptive technologies, with 69 million new jobs anticipated alongside a larger number eliminated. Specifically, demand for analytical thinking and AI/big data skills is growing rapidly; 42% of firms intend to invest in AI/data skills training by 2027. This churn implies that employees in routine roles may be laid off, while demand rises for AI experts and managers capable of overseeing AI systems. The challenge of reskilling the workforce therefore falls to policymakers and educators. Some of the literature proposes phased transitions and strong social safety nets to mitigate displacement effects.

Ethical Governance and Accountability

The introduction of AI has raised general societal questions. When an AI makes a harmful decision, who is accountable? Can the transparency of black-box models be guaranteed? The international community has responded; for example, UNESCO's 2021 Recommendation on the Ethics of AI focuses on protecting human rights, transparency and human control of AI systems (Laat, 2017; Morandín-Ahuerma, 2023; Saikanth et al., 2024). The forthcoming EU AI Act places high-risk AI applications (e.g., biometrics and critical infrastructure) under stringent conditions. However, a patchwork of guidelines persists worldwide. The literature emphasises effective governance systems (e.g., fairness certifications and impact evaluation). External auditing and corporate ethical policies (diverse design teams and ethics boards) are common suggestions (Hirvonen et al., 2023). It is widely agreed that innovation should be balanced with robust accountability standards so that AI is neither misused nor allowed to harm society.

These problems are connected. As an illustration, AI bias can be addressed by increasing diversity in the development process and improving AI literacy, whereas privacy-conscious architectures can lower data-security threats. Overall, these issues must be addressed to realise the full potential of AI without compromising trust or equity.

Discussion

Our review confirms that AI/ML brings both significant benefits and serious concerns. On the one hand, efficiency gains, personalised experiences and new capabilities in the domains discussed above propel productivity and innovation. Through early detection, AI in healthcare can save lives; AI in education can enhance learning outcomes; smart home AI can minimise energy consumption and add comfort; and financial AI can raise the safety and availability of services. This aligns with an emerging literature consensus that AI is a 'catalyst to economic growth', manifested in its rapidly growing market size and the competitive benefits reported by companies.

Conversely, the challenges discussed in the preceding section present an acute dilemma. For example, although smart devices gather data to understand user preferences (convenience), they also gather personal data, making them vulnerable to security threats. Although AI tutors have the potential to enhance education, they can also exacerbate the digital divide by disadvantaging students who lack internet access. This dual nature has been observed in the literature across all disciplines. As Chen et al. (2022) observed, AI can analyse data faster and more comprehensively than humans, but the choices AI makes are shaped by the data it is given as input; that is, AI can be only as objective as its inputs.

Our results are in line with those of previous surveys and policy analyses. To illustrate, the CDC (2024) states that AI must promote equity rather than widen disparities, and the WEF (2023) states that reskilling is necessary as digital adoption grows. We extend these findings to everyday-life contexts, gathering evidence across several sectors. Where the literature tends to treat particular sectors individually, our synthesis focuses on cross-cutting themes (e.g., privacy issues in both healthcare and smart home contexts).

This synthesis has policy implications; it indicates that regulators should adopt holistic AI approaches instead of siloed rules. For example, privacy law should encompass both AI devices in homes and data-driven medical AI. Workforce development programmes must span the industries likely to be affected by AI. Our results confirm the need to regulate AI in an interdisciplinary manner, that is, with technical norms (to ensure fairness and robustness) alongside social policies on education and work.

Finally, we note certain limitations. Much of the available research is either qualitative or short term. Longitudinal, large-scale empirical research on the societal effects of AI is scarce (Rejeb et al., 2022). The long-term effects of many emerging technologies (e.g., generative AI) may not yet be well understood, as they continue to develop beyond 2023. Our review also depends on the published literature, which might not reflect proprietary best practices in industry. Nevertheless, we offer a current overview of this knowledge by drawing on the most recent peer-reviewed and industry sources.

Future Directions

Moving forward, the role of AI in everyday life will be determined by several trends. Generative AI (e.g., large language and image models) is opening up new possibilities such as AI-generated media and educational content. AI-based features, such as AI tutors that provide tailored explanations or news sites written with AI assistance, will spread. The high growth forecasts for the entertainment industry (Figure 6) can also be attributed to the influence of generative and recommendation AI (Neugnot-Cerioli & Laurenty, 2024).

 

Figure 6. Predicted AI Development in Media and Entertainment (2024–2033).

 

Other trends include edge AI and on-device intelligence. Instead of relying on cloud data centres, more AI will operate on smartphones, home appliances and vehicles (edge computing). This can enhance privacy (data remain local) and decrease latency. For example, smartphones are becoming increasingly capable of on-device image recognition and speech processing. Research on TinyML and wearable IoT is producing energy-efficient ML chips. Such decentralisation means AI can support real-time applications (e.g., instant translation or health monitoring) without needing a constant internet connection.

Cooperation between humans and AI will deepen. Many future systems will not be fully autonomous but will instead augment human capability. In education, we envisage AI teaching assistants that help teachers personalise classes. In medicine, AI will serve as a second opinion for physicians rather than a substitute for them. According to the World Economic Forum, machines already perform 34% of business tasks; in the future, routine work (especially information processing) will become more automated, while creative and empathetic work will remain a human prerogative (Anthes, 2017). A major line of research is the development of interfaces and tools that facilitate smooth collaboration (e.g., explainable AI and intuitive dashboards).

AI must also develop sustainably and ethically. It is increasingly recognised that large AI models are heavy energy consumers (with a sizeable carbon footprint), so future AI will need to focus on efficiency. Additionally, as AI proliferates, researchers need to ensure that its development is inclusive, for example, by gathering more varied training data and engaging ethicists in design. The UNESCO Recommendation provides a framework, but education and institutional support are required to make it a reality. Governments and industry leaders are expected to invest more in AI ethics research and regulation in the years ahead.

Lastly, AI will continue to be woven into everyday life through new developments such as AI-assisted biotechnology, robotics integration (household robots) and smart city infrastructure. All of these raise new legal and social difficulties that require interdisciplinary research (Adewusi et al., 2024; Huang et al., 2022; Loureiro et al., 2020). To conclude, the future of AI implies increasing power, but also the need for responsible AI innovation that actively considers privacy, justice and human values. Subsequent efforts must monitor the outcomes of AI applications and refine best practices so that technological advancement does not come at the expense of societal well-being.

Conclusion

As discussed in this article, AI and ML now permeate nearly all areas of life: personalised healthcare and educational systems, intelligent homes and smarter transportation. The evidence points to clear benefits: higher productivity, greater convenience and new services whose cost-effectiveness translates into economic gains (as the multi-trillion-dollar market forecasts suggest). However, these gains carry significant caveats. Essential issues such as personal data protection, the elimination of algorithmic bias, workforce transitions and ethical governance cannot be disregarded. Our synthesis underlines the importance of a balanced approach: continued innovation requires strong ethical norms and policies (echoing UNESCO's call to make AI human-centric). Through a critical analysis of both sides, this article argues that the ultimate outcome of AI in society depends on its responsible integration. Going forward, collaboration among technologists, policymakers and citizens will be essential to ensure that AI improves life rather than worsens it, without damaging trust and fairness.

Declaration of Conflicting Interests

The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.

Funding

The authors received no financial support for the research, authorship and/or publication of this article.

ORCID iDs

Jogen Sharma  https://orcid.org/0000-0001-7151-0137

Dayawanti Tarmali  https://orcid.org/0009-0001-0159-8505

References

Adewusi, A. O., Asuzu, O. F., Olorunsogo, T., Olorunsogo, T., Adaga, E. M., & Daraojimba, D. O. (2024). AI in precision agriculture: A review of technologies for sustainable farming practices. World Journal of Advanced Research and Reviews, 21(1), 2276. https://doi.org/10.30574/wjarr.2024.21.1.0314

Almusaed, A., Almssad, A., Yitmen, I., & Homod, R. Z. (2023). Enhancing student engagement: Harnessing ‘AIED’s’ power in hybrid education—A review analysis. Education Sciences, 13(7), 632. https://doi.org/10.3390/educsci13070632

Anang, A. N., Ajewumi, O. E., Sonubi, T., Nwafor, K. C., Arogundade, J. B., & Akinbi, I. J. (2024). Explainable AI in financial technologies: Balancing innovation with regulatory compliance. International Journal of Science and Research Archive, 13(1), 1793. https://doi.org/10.30574/ijsra.2024.13.1.1870

Anthes, E. (2017). The shape of work to come. Nature, 550(7676), 316. https://doi.org/10.1038/550316a

Buchanan, B. (2019). Artificial intelligence in finance. Zenodo. https://doi.org/10.5281/zenodo.2612537

Bühler, M. M., Jelinek, T., & Nübel, K. (2022). Training and preparing tomorrow’s workforce for the fourth Industrial Revolution. Education Sciences, 12(11), 782. https://doi.org/10.3390/educsci12110782

Cao, S., Jiang, W., Lei, L., & Zhou, Q. (2024). Applied AI for finance and accounting: Alternative data and opportunities. Pacific-basin Finance Journal, 84, 102307. https://doi.org/10.1016/j.pacfin.2024.102307

Centers for Disease Control and Prevention. (2024). Public health surveillance and data modernization initiative: Annual progress report 2024. U.S. Department of Health & Human Services. https://www.cdc.gov

Chen, X., Zhang, Y., Li, M., & Zhou, H. (2022). Digital health transformation and public health resilience in the post-pandemic era: A global analysis. Journal of Public Health Research, 11(3), 455–470. https://doi.org/10.4081/jphr.2022.1234

Dangeti, A., Bynagari, D. G., & Vydani, K. (2023). Revolutionizing drug formulation: Harnessing artificial intelligence and machine learning for enhanced stability, formulation optimization, and accelerated development. International Journal of Pharmaceutical Sciences and Medicine, 8(8), 18. https://doi.org/10.47760/ijpsm.2023.v08i08.003

Fuentes-Peñailillo, F., Gutter, K., Vega, R., & Carrasco, G. (2024). Transformative technologies in digital agriculture: Leveraging Internet of Things, remote sensing, and artificial intelligence for smart crop management. Journal of Sensor and Actuator Networks, 13(4), 39. https://doi.org/10.3390/jsan13040039

Geng, W., & Bi, C. (2023). Market demand of smart home under the perspective of smart city. E3S Web of Conferences, 440, 6003. https://doi.org/10.1051/e3sconf/202344006003

Hirvonen, N., Jylhä, V., Lao, Y., & Larsson, S. (2023). Artificial intelligence in the information ecosystem: Affordances for everyday information seeking. Journal of the Association for Information Science and Technology, 75(10), 1152. https://doi.org/10.1002/asi.24860

Holm, J. R., Hain, D. S., Jurowetzki, R., & Lorenz, E. (2023). Innovation dynamics in the age of artificial intelligence: Introduction to the special issue. Industry and Innovation, 30(9), 1141. https://doi.org/10.1080/13662716.2023.2272724

Huang, C., Zhang, Z., Mao, B., & Yao, X. (2022). An overview of artificial intelligence ethics. IEEE Transactions on Artificial Intelligence, 4(4), 799. https://doi.org/10.1109/tai.2022.3194503

Istudor, N., Socol, A. G., Marina, M.-C., & Socol, C. (2024). Analysis of the adequacy of employees: Skills for the adoption of artificial intelligence in Central and Eastern European countries. Amfiteatru Economic, 26(67), 703. https://doi.org/10.24818/ea/2024/67/703

Jois, T. M., Beck, G., Belikovetsky, S., Carrigan, J., Chator, A., Kostick, L., Zinkus, M., Kaptchuk, G., & Rubin, A. D. (2023). SocIoTy: Practical cryptography in smart home contexts. Proceedings on Privacy Enhancing Technologies, 2024(1), 447. https://doi.org/10.56553/popets-2024-0026

Kamalov, F., Calonge, D. S., & Gurrib, I. (2023). New era of artificial intelligence in education: Towards a sustainable multifaceted revolution. Sustainability, 15(16), 12451. https://doi.org/10.3390/su151612451

Kumar, M., Kumar, R., Arisham, D. K., Gupta, R. K., Naudiyal, P., Goutam, G., & Mavi, A. K. (2025). Emerging AI impact in the healthcare sector: A review. European Journal of Environment and Public Health, 9(1). https://doi.org/10.29333/ejeph/15905

Laat, P. B. de. (2017). Algorithmic decision-making based on machine learning from big data: Can transparency restore accountability? Philosophy & Technology, 31(4), 525. https://doi.org/10.1007/s13347-017-0293-z

Liu, J. (2024). ChatGPT: Perspectives from human–computer interaction and psychology. Frontiers in Artificial Intelligence, 7. https://doi.org/10.3389/frai.2024.1418869

Loureiro, S. M. C., Guerreiro, J., & Tussyadiah, I. (2020). Artificial intelligence in business: State of the art and future research agenda. Journal of Business Research, 129, 911. https://doi.org/10.1016/j.jbusres.2020.11.001

Morandín-Ahuerma, F. (2023). Ten UNESCO recommendations on the ethics of artificial intelligence. https://doi.org/10.31219/osf.io/csyux

Mouta, A., Sánchez, E. M. T., & Llorente, A. M. P. (2024). Comprehensive professional learning for teacher agency in addressing ethical challenges of AIED: Insights from educational design research. Education and Information Technologies. https://doi.org/10.1007/s10639-024-12946-y

Neugnot-Cerioli, M., & Laurenty, O. M. (2024). The future of child development in the AI era: Cross-disciplinary perspectives between AI and child development experts. Cornell University. https://doi.org/10.48550/arxiv.2405.19275

Ooi, K. B., Hew, T. S., Lee, V. H., & Tan, G. W. (2023). Artificial intelligence adoption in healthcare: Systematic review of models, barriers, and enabling factors. International Journal of Medical Informatics, 172, 105012. https://doi.org/10.1016/j.ijmedinf.2023.105012

Rejeb, A., Rejeb, K., Zailani, S., Keogh, J. G., & Appolloni, A. (2022). Examining the interplay between artificial intelligence and the agri-food industry. Artificial Intelligence in Agriculture, 6, 111. https://doi.org/10.1016/j.aiia.2022.08.002

Saikanth, D. R. K., Ragini, M., Tripathi, G., Kumar, R., Giri, A., Pandey, S. K., & Verma, L. (2024). The impact of emerging technologies on sustainable agriculture and rural development. International Journal of Environment and Climate Change, 14(1), 253. https://doi.org/10.9734/ijecc/2024/v14i13830

Santos, C. B. dos, & Oliveira, E. de. (2020). Production engineering competencies in the Industry 4.0 context: Perspectives on the Brazilian labor market. Production, 30. https://doi.org/10.1590/0103-6513.20190145

Tuomi, A., Tussyadiah, I., Ling, E. C., Miller, G., & Lee, G. (2020). x=(tourism_work) y=(sdg8) while y=true: automate(x). Annals of Tourism Research, 84, 102978. https://doi.org/10.1016/j.annals.2020.102978

Webb, M., Fluck, A., Magenheim, J., Malyn-Smith, J., Waters, J., Deschênes, M., & Zagami, J. (2020). Machine learning for human learners: Opportunities, issues, tensions and threats. Educational Technology Research and Development, 69(4), 2109. https://doi.org/10.1007/s11423-020-09858-2

Weber, P., Carl, K. V., & Hinz, O. (2023). Applications of explainable artificial intelligence in finance: A systematic review of finance, information systems, and computer science literature. Management Review Quarterly, 74(2), 867. https://doi.org/10.1007/s11301-023-00320-0

World Economic Forum. (2023). Global health and healthcare strategic outlook: Shaping the future of health and healthcare systems. World Economic Forum. https://www.weforum.org/reports/
