Artificial Intelligence (AI) is rapidly becoming integral to decision-making across industries, from healthcare and finance to governance and defense. As AI systems take on increasingly complex and high-stakes tasks, their lack of transparency has raised concerns about whether they can be trusted. Conventional wisdom holds that explanation precedes trust: people trust AI when they understand how it makes decisions. A new study challenges this assumption, however, arguing that trust in AI may be inevitable even when explanations are incomplete or unavailable.