OpenAI vs Develop Diverse: answers to the most common questions
Insights from our data team
If you’re like most of our users, you’ve probably wondered at some point: “How far can I go with ChatGPT when writing inclusive job ads?”
It’s one of our most frequent questions, and the answer might surprise you, because not all AI works the same way. The choice between a generic chatbot and purpose-built technology can be the difference between consistently bias-free content and a gamble on whether today’s AI output happens to be accurate.
So before you copy-paste your next job ad into ChatGPT, let’s break down what’s really happening under the hood and answer the most common questions we get on the topic.
Questions & Answers
Can I use AI to make my job ad inclusive and bias-free?
The answer is both yes and no. Yes, because Develop Diverse uses one type of AI to do exactly this. No, because other kinds of AI, such as generative large language models, are highly unreliable at it. Let’s take a closer look at the types of interaction with AI we’ve had questions about.
A prompt is the message you send to, e.g., ChatGPT or Copilot. You ask a question or tell it how to write the text you need.
A custom GPT is set up to always follow specific instructions on top of the prompt you send, such as knowing your brand voice when you want it to write an email. This way, you don’t have to repeat your brand-voice guidelines in every prompt.
Fine-tuning a model involves annotating a dataset and training the model on it. Fine-tuning can adjust the model’s behavior to fit your needs better. The sketch below illustrates all three interaction types.
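Here is a minimal sketch of the first two interaction types in code, plus the data format fine-tuning expects. It uses the OpenAI Python SDK; the model name, instructions, and placeholder text are illustrative assumptions, not recommendations.

```python
# A minimal sketch of the three interaction types, using the OpenAI Python SDK.
# Model name and instructions are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. A plain prompt: a single message containing everything.
prompt_reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "Rewrite this job ad to be more inclusive: ..."}],
)

# 2. A custom GPT (approximated here with a system message): standing
#    instructions applied on top of every prompt you send.
custom_reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Always write in our brand voice: plain, direct, no jargon."},
        {"role": "user",
         "content": "Rewrite this job ad to be more inclusive: ..."},
    ],
)

# 3. Fine-tuning: instead of instructions, you supply annotated examples
#    (here in the JSONL format OpenAI's fine-tuning endpoint expects) and
#    train a variant of the model on them:
# {"messages": [{"role": "user", "content": "<job ad sentence>"},
#               {"role": "assistant", "content": "<expert-annotated label>"}]}
```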
What's the difference between generative AI tools like ChatGPT and Develop Diverse's approach to detecting bias in text?
Prompts and custom GPTs both run on large generative language models, meaning they can write text for you. Whether you use ChatGPT, Copilot, or any other generative language model, by default they will not give you the same answer to the same question twice. They are statistical models trained on texts available on the internet. They write one token (a few characters) after another based on probability, not on knowledge or insight, yet the answers always sound plausible. Sometimes the correct answer is also the most probable one, and you get lucky. Sometimes you are unlucky. But you will always have to fact-check.
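To make the probability point concrete, here is a toy sketch in plain Python. The vocabulary and probabilities are invented; real models sample from distributions over tens of thousands of tokens, but the mechanism is the same. Run it a few times and the “completions” differ.

```python
# Toy illustration of why generative models give different answers to the
# same prompt: the next token is *sampled* from a probability distribution,
# not looked up. Vocabulary and probabilities here are invented.
import random

next_token_probs = {
    "driven": 0.40,      # most probable, but never guaranteed
    "motivated": 0.35,
    "passionate": 0.20,
    "rockstar": 0.05,
}

def sample_next_token():
    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# "Send the same prompt" five times: the completions differ run to run.
for _ in range(5):
    print("We are looking for a", sample_next_token(), "colleague.")
```

Lowering the sampling temperature to 0 reduces this variation, but in hosted models it does not generally guarantee identical output every time.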
At Develop Diverse, we don’t rely on generative AI to detect biases; we fine-tuned a model. This solution has the advantage that the model analyzes a given text consistently, providing stability. Linguists specializing in language psychology and inclusive language have trained the model to be of high quality, ensuring reliability. The disadvantage is that it takes a lot of manual labor.
Can I practice prompting and get a bias-free job ad?
No. Large language models do not know all the research in language psychology, nor the studies on candidates’ reactions to certain words and concepts in job ads. Even if they did, there is no guarantee that they could perform qualitative analyses and interpretations of texts in accordance with a research-based framework in a reliable and consistent way.
You can tweak your prompt and maybe get a few lucky hits. But it will always be a matter of luck.
Can I make a custom GPT to detect and write a bias-free job ad?
No. A custom GPT will not be able to write you a bias-free job ad. Your main problem will be unreliable output. Having a gender-inclusive job ad will increase the number of applicants by 52%*, and you can get there with just a few tweaks to your ad. But you need to be very precise when making changes to your text. GPTs are good at creative thinking at a higher level, but they repeatedly fail to follow exact, detailed writing instructions (the sketch at the end of this answer shows one way to test this yourself).
Your next problem would be verifying the output. Most biases are subconscious and naturally not obvious to most of us, so you would have no way of knowing whether the text is, in fact, bias-free.
To do highly specialized tasks, you will always need a subject matter expert. The large language models are designed to generate plausible-sounding answers, and that is what you are going to get. Not verified, not correct, not citing a real reference: only plausible-sounding answers.
*The Data Guide to Better Job Ads, Develop Diverse, 2025
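If you want to see the unreliability for yourself, here is a hedged sketch of a simple test harness: send the same strict instruction several times and count how often a machine-checkable rule is broken. The model name, instruction, and banned-word check are illustrative assumptions, and most biases are not machine-checkable like this, which is exactly the verification problem described above.

```python
# Hedged sketch: measure instruction-following reliability by sending the
# same strict instruction repeatedly and counting rule violations.
# Model name, instruction, and banned-word check are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
BANNED = {"rockstar", "ninja", "guru"}
INSTRUCTION = (
    "Rewrite the following job ad. Do NOT use the words "
    "'rockstar', 'ninja', or 'guru': We need a coding rockstar who ..."
)

violations = 0
runs = 10
for _ in range(runs):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": INSTRUCTION}],
    )
    text = reply.choices[0].message.content.lower()
    if any(word in text for word in BANNED):
        violations += 1

print(f"{violations}/{runs} outputs broke an explicit, checkable rule.")
# Most biases cannot be caught by a word list like this one, which is
# why verifying "bias-free" output needs subject matter experts.
```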
Can I make a custom GPT an expert on inclusive language?
The large language models are trained on human data. Since humans are naturally biased, the models are too, even when it comes to inclusive and bias-free language. If you instruct a GPT to write inclusively, it will often produce a very warm text emphasizing close relationships and other communal values. That warmth could clash with your brand voice, and it also goes against our recommendations, as a warmer feel tends to discourage men and neurodivergent people from applying to your job ad.
Another thing that a generative model could add is a made-up DEIB statement. The AI-generated DEIB statements tend to be generic and vague, which negatively affects potential applicants’ decision to apply. The effects are even more negative than if you do not have a DEIB statement at all*. If you include such a statement in your ad, make sure to mention concrete initiatives and activities you have in place to ensure inclusion and diversity in the workplace.
*Heath, Carlsson, Agerström. “What adds to job ads? The impact of equality and diversity information on organizational attraction in minority and majority ethnic groups”, in Journal of Occupational and Organizational Psychology (2023)
Can I train a language model to detect biased language in my job ads?
Yes. That is what Develop Diverse did. For that, you need subject matter experts, in this case linguists who specialize in language psychology and in qualitative analysis of bias and discrimination in written language. These subject matter experts fine-tune a language model. It is a long process, but it gives you a reliable output in more than one way. First, the model will always analyze a given text the same way, every time. Second, you can trust that experts have verified the output. Third, it is controllable: it is easy to update the model with the newest research findings, and we can fix potential errors. For more details, read about the process behind Develop Diverse.
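For illustration only, here is a toy sketch of that general shape: expert-annotated examples go in, a trained classifier comes out, and inference is deterministic. It uses scikit-learn and invented data; it is not Develop Diverse’s actual model, training data, or framework.

```python
# Toy sketch of the general approach: expert-annotated data -> trained
# classifier -> consistent output. Uses scikit-learn and invented data;
# NOT Develop Diverse's actual model or annotation scheme.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical expert annotations: 1 = flagged as potentially biased wording.
texts = [
    "We need a dominant, aggressive closer.",
    "You collaborate well and support your teammates.",
    "Seeking a young and energetic rockstar developer.",
    "You have five years of experience with Python.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Unlike a generative model, inference is deterministic: the same
# sentence always gets the same analysis.
sentence = ["We want an aggressive self-starter."]
first = model.predict(sentence)[0]
second = model.predict(sentence)[0]
assert first == second  # same text, same result, every time
print(first)            # e.g. 1 (flagged)
```

A real system would need far more data and a research-based annotation scheme; the point is only that the expert effort is front-loaded into training, and the analysis itself is then stable and repeatable.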