Early signals from our AI survey: use is growing, confidence is not

In a recent article, we looked at how AI is showing up in conversations across the Cafepharma message boards. The takeaway was clear: people are paying attention, experimenting quietly, and asking real questions, often without much guidance.

Since then, we’ve started collecting responses to our ongoing AI survey. While it’s still early, a few signals are already emerging. One stands out in particular: AI use is happening, but confidence hasn’t caught up.

Even in these early responses, it’s clear that AI tools are already part of many people’s day-to-day work. Respondents across sales, leadership, training, and commercial roles report using tools like ChatGPT and Copilot to summarize information, organize written material, and support learning or preparation.

What’s notable is how AI is being used. Rather than functioning as an “advanced sales tool,” it’s showing up primarily as a support mechanism: a way to process complexity, save time, or get unstuck. Very little points to AI replacing judgment or human interaction. Instead, it’s being used to make sense of information that already feels overwhelming.

At the same time, confidence in using these tools effectively is uneven, and often low. Many respondents describe themselves as only somewhat confident, or not very confident at all, even when they’re already using AI in their work.

That gap between experimentation and confidence is important. It suggests that access to AI isn’t the main barrier. Understanding how to use it well, safely, and appropriately is.

The concerns people raise are also strikingly consistent. Accuracy and hallucinated information come up repeatedly, as do questions around compliance, company policy, and how AI use might be perceived by management. These aren’t abstract worries. In regulated environments such as biopharma and medtech, getting something wrong matters. Caution, in this context, is rational.

Taken together, these early responses point less to resistance than to a guidance gap. People are curious. They’re testing what AI can do. But they’re often doing so without shared standards, clear examples, or guardrails that would help them feel more confident in their approach.

That gap matters because AI tools are only going to become more common. As they do, uneven confidence leads to uneven outcomes. Some people will develop effective, low-risk ways to use these tools. Others will avoid them entirely, or use them in ways that introduce more risk than benefit.

Our AI survey remains open, and we’ll share more structured findings once a larger number of responses are in. In the meantime, we’ll be taking a closer look at how people are using AI, as well as where relatively small shifts, like asking better questions or using clearer prompts, can make a meaningful difference. These early signals will evolve as more responses come in, but they already point to a clear need for practical, role-specific guidance.

Understanding where people are today is the first step. Helping them use these tools with more confidence is the next.

Please take our survey
