This enhancement proposes adding a “GPT Request/Response Transparency Panel” to the workflow and event logs, giving users full visibility into what was sent to GPT and what the model returned. The goal is to make AI-driven automations easier to understand, debug, and optimize.

The panel would show the full prompt, including resolved dynamic fields, the system instructions that were applied, and the final compiled version that was actually transmitted to the model. Alongside this, it would display detailed token metrics: input tokens, output tokens, total tokens, and an estimated cost, helping users track their usage and manage expenses.

The raw GPT output would also be accessible in an unformatted view, together with model details and any error messages, enabling precise troubleshooting whenever something unexpected happens. This level of transparency would give users far more control over their AI interactions, support cleaner workflows, and make it easier to diagnose issues when GPT behaves differently than expected.