Voters using Voting Advice Applications (VAAs) often struggle with comprehension, leading to satisficing behavior. Conversational Agent VAAs (CAVAAs) aim to improve understanding by allowing users to ask a chatbot comprehension questions. While early research shows positive effects, studies have so far compared CAVAAs only to basic VAAs, not to enhanced VAAs (VAA+s) that offer clickable information buttons. The current study compares four versions: a basic VAA, a VAA+, a CAVAA, and a CAVAA+, tested on desktop (Study 1) and smartphone (Study 2). Results show that all three information-rich versions reduce non-directional answers and positively affect political and tool-evaluation measures, suggesting that added information, regardless of format, improves response quality and user experience. Interestingly, users request more information in the web-based VAA+ than via the chatbot. We discuss these and other findings in the paper.