Infosys and MIT Technology Review Insights Report Reveals the Critical Role of Psychological Safety in Driving AI Initiatives — with 83% of Business Leaders Reporting a Measurable Impact

New report from Infosys and MIT Technology Review Insights shows that trust, transparency and a ‘safe to fail’ culture are essential for scaling AI initiatives across global organizations

CAMBRIDGE, Mass. and BENGALURU, India, Dec. 16, 2025 /PRNewswire/ — A new global report by Infosys (NSE, BSE, NYSE: INFY) and MIT Technology Review Insights reveals that 83 percent of business leaders believe psychological safety directly impacts the success of enterprise AI initiatives. Creating psychological safety in an era of AI takes more than good intentions or blanket HR policies; it requires explicit messaging about AI’s realistic capabilities, limits and approved use cases. Through its collaboration with MIT Technology Review Insights, Infosys aims to equip global leaders with insights and strategies to adopt AI responsibly at scale, leveraging Infosys Topaz, an AI-first suite of services, solutions and platforms.

The report, “Creating Psychological Safety in the AI Era,” highlights how employees often hesitate to experiment, challenge assumptions or lead projects due to fear of backlash, which undermines innovation even when the technological capabilities exist. It shows that despite major investments in AI, workplace fear – particularly fear of failure – remains one of the biggest barriers to adoption.

Despite rapid advances in AI technology, the report finds that human factors are holding enterprises back. Fear of failure, unclear communication and limited leadership openness often prevent employees from fully engaging with AI initiatives. In fact, organizations may have the tools and strategies in place, but without psychological safety, adoption falters. The findings highlight that scaling AI is as much about building trust and resilience within the workforce as it is about deploying cutting-edge systems. 

The report’s key findings include:

  • A culture of psychological safety leads to greater success with AI projects. More than four out of five (83 percent) respondents say psychological safety has a measurable impact on the success of AI initiatives, and 84 percent report direct links between psychological safety and tangible business outcomes.
  • Fear is holding leaders back. Nearly one-quarter (22 percent) of respondents admit they have hesitated to lead or suggest an AI project because of fear of failure or potential criticism. Encouragingly, nearly three-quarters (73 percent) say they feel safe to provide honest feedback and express opinions freely in the workplace.
  • Achieving psychological safety is a moving target. Fewer than half (39 percent) of respondents describe their current level of psychological safety as “high,” while 48 percent report a “moderate” degree of it. This highlights a gap: some enterprises are pursuing AI adoption on cultural foundations that are not yet fully stable.
  • Communication and leadership behaviors are critical levers. Sixty percent of respondents say clarity on how AI will – and won’t – impact jobs would do the most to improve psychological safety, while just over half (51 percent) cite leadership modeling openness to questions, dissent and failure as equally important.
  • Creating psychological safety takes more than good intentions or HR policies. It requires explicit messaging about AI’s realistic capabilities, limits and approved use cases. Clear communication and ongoing dialogue help companies prioritize transparency, ethics and stakeholder engagement.

Laurel Ruma, Global Editorial Director, MIT Technology Review Insights said, “Our research, in collaboration with Infosys, shows that psychological safety is not a soft metric, it is a measurable driver of AI outcomes. Leaders who communicate clearly about AI’s impact and model openness to questions and dissent create the conditions for innovation. Without that foundation of trust, even the most advanced AI strategies will falter.”

Rafee Tarafdar, Chief Technology Officer, Infosys said, “We’ve observed that the most successful enterprise AI transformations happen in organizations that foster psychological safety. When employees feel empowered to experiment without fear of failure, innovation thrives. This culture of trust and openness enables teams to unlock the full potential of AI, driving meaningful business outcomes and sustainable growth.”

Sushanth Tharappan, Executive Vice President – HR, Infosys said, “At Infosys, we’ve built a culture of innovation where employees are constantly looking for new opportunities to innovate with AI. We’ve seen firsthand how psychological safety accelerates adoption and when employees have safe spaces to experiment and reimagine roles, it streamlines the technological aspect. This report confirms that enterprises must pair technical investment with cultural transformation if they want AI to deliver lasting impact.”

The report underscores that AI transformation is not only a technological journey, but also a cultural one. By prioritizing psychological safety, enterprises can build the trust, resilience and openness needed to unlock the full potential of AI.

About MIT Technology Review Insights

MIT Technology Review Insights is the custom publishing division of MIT Technology Review, the world’s longest-running technology magazine, backed by the world’s foremost technology institution—producing live events and research on the leading technology and business challenges of the day. Insights conducts qualitative and quantitative research and analysis in the U.S. and abroad and publishes a wide variety of content, including articles, reports, infographics, videos, and podcasts.

About Infosys

Infosys is a global leader in next-generation digital services and consulting. Over 320,000 of our people work to amplify human potential and create the next opportunity for people, businesses, and communities. We enable clients in 59 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer clients as they navigate their digital transformation powered by cloud and AI. We enable them with an AI-first core, empower the business with agile digital at scale and drive continuous improvement with always-on learning through the transfer of digital skills, expertise, and ideas from our innovation ecosystem. We are deeply committed to being a well-governed, environmentally sustainable organization where diverse talent thrives in an inclusive workplace.

Visit www.infosys.com to see how Infosys (NSE, BSE, NYSE: INFY) can help your enterprise navigate your next. 

Safe Harbor

Certain statements in this release concerning our future growth prospects, or our future financial or operating performance, are forward-looking statements intended to qualify for the ‘safe harbor’ under the Private Securities Litigation Reform Act of 1995, which involve a number of risks and uncertainties that could cause actual results or outcomes to differ materially from those in such forward-looking statements. The risks and uncertainties relating to these statements include, but are not limited to, risks and uncertainties regarding the execution of our business strategy, increased competition for talent, our ability to attract and retain personnel, increase in wages, investments to reskill our employees, our ability to effectively implement a hybrid work model, economic uncertainties and geo-political situations, technological disruptions and innovations such as artificial intelligence (“AI”), generative AI, the complex and evolving regulatory landscape including immigration regulation changes, our ESG vision, our capital allocation policy and expectations concerning our market position, future operations, margins, profitability, liquidity, capital resources, our corporate actions including acquisitions, and cybersecurity matters. Important factors that may cause actual results or outcomes to differ from those implied by the forward-looking statements are discussed in more detail in our US Securities and Exchange Commission filings including our Annual Report on Form 20-F for the fiscal year ended March 31, 2025. These filings are available at www.sec.gov. Infosys may, from time to time, make additional written and oral forward-looking statements, including statements contained in the Company’s filings with the Securities and Exchange Commission and our reports to shareholders. The Company does not undertake to update any forward-looking statements that may be made from time to time by or on behalf of the Company unless it is required by law.

Logo – https://mma.prnewswire.com/media/633365/5460444/Infosys_Logo.jpg

View original content: https://www.prnewswire.com/news-releases/infosys-and-mit-technology-review-insights-report-reveals-the-critical-role-of-psychological-safety-in-driving-ai-initiatives–with-83-of-business-leaders-reporting-a-measurable-impact-302643644.html

SOURCE Infosys
