Abstract: The literature on name-based biases in hiring suggests pervasive discrimination: résumés with White-sounding names receive more callbacks than identical résumés with Black-sounding names. This study assesses whether AI systems, specifically open Large Language Models (LLMs), exhibit similar biases. The LLMs evaluated résumés on attributes such as competence and warmth, which were aggregated into a composite score for each résumé. Changing only the name attached to a résumé altered its evaluation, even though the content was identical. Statistically significant race and gender biases were found in most models' warmth and competence ratings. Unlike in typical hiring settings, however, résumés bearing Black-sounding and female names were rated slightly higher by the LLMs. These findings highlight the importance of examining AI tools used in hiring, as they may unintentionally reflect societal biases.
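The audit design summarized above can be sketched in a few lines. The sketch below is illustrative only and is not the paper's actual pipeline: the prompt wording, the 1–10 rating scale, and the `query_llm` stub are assumptions introduced here, with the stub standing in for a call to whichever open LLM is under test.

```python
# Illustrative sketch of the name-swap audit described in the abstract.
# Prompt wording, the 1-10 scale, and query_llm are assumptions for
# illustration, not the study's actual implementation.
from statistics import mean

ATTRIBUTES = ["competence", "warmth"]


def build_prompt(resume_text: str, name: str, attribute: str) -> str:
    """Attach a candidate name to an otherwise fixed résumé and request a rating."""
    return (
        f"Candidate name: {name}\n\nRésumé:\n{resume_text}\n\n"
        f"On a scale of 1 to 10, rate this candidate's {attribute}. "
        "Reply with a single number."
    )


def query_llm(prompt: str) -> float:
    """Placeholder for a call to an open LLM; swap in a real client here."""
    raise NotImplementedError("connect this to the model under test")


def composite_score(resume_text: str, name: str) -> float:
    """Average the per-attribute ratings into one composite score per résumé."""
    return mean(
        query_llm(build_prompt(resume_text, name, attr)) for attr in ATTRIBUTES
    )


def name_swap_gap(resume_text: str, name_a: str, name_b: str) -> float:
    """Bias signal: the score difference when only the name is changed."""
    return composite_score(resume_text, name_a) - composite_score(resume_text, name_b)
```

Running `name_swap_gap` over many résumés and many name pairs, and testing whether the gaps differ from zero, corresponds to the significance tests of race and gender bias reported in the abstract.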