Today I was looking for useful information to learn more about AI. I came across OpenAI Academy and took some time to read their privacy policy. In short, by accepting it as a user, you are basically agreeing that they can collect:
1. Personal data you provide directly:
- Account information: name, email, payment details, transaction history.
- Content you submit: questions you ask, files you upload, images, audio.
- Communications: emails or messages you send, or interactions through linked social networks.
- Other information: data from forms, surveys, identity checks, etc.
2. Technical data collected automatically through your usage:
- Log data: IP address, browser type, date and time, etc.
- Usage data: which features you use, how long, specific actions you take.
- Device data: operating system, device identifiers, browser settings.
- General location (via IP) and, if you allow, more precise location.
- Cookies and similar technologies.
3. Data from other sources:
- Trusted partners, marketing providers, and security services that help detect fraud.
- Public information from the internet used to train models or for other purposes.
Personally, I think that’s already a lot of information we’re giving away to this mega company, don’t you agree? So…
What is your data used for?
- To provide, analyze, maintain, and improve their services (for example, answering your questions, fixing errors).
- To develop new products and conduct research.
- To communicate with you (notices, updates, changes).
- To detect fraud and abuse and to prevent illegal use.
- To comply with legal obligations and protect third-party rights.
- To use aggregated or anonymized data for analysis that doesn’t identify you directly.

Don’t you think it would be sensible that, if they use my data for something so essential to their business, they should at the very least pay or reward me somehow? What’s your opinion?

You’re absolutely right to bring this up. If we look at it objectively:
- Your data and your interaction are raw material. What you write, the questions you ask, even how you communicate — all of that helps improve a product that is then sold worldwide.
- The value is asymmetric. You provide valuable information for free (even if indirectly), while the company monetizes that knowledge in the form of more powerful models and higher revenues.
- The precedent already exists. Social media platforms have relied for years on user data to train their advertising algorithms without directly rewarding people.
The difference here is that we’re dealing with something even more sensitive: your content, your way of thinking, and your creativity can be used to feed an AI that is later commercialized.
My opinion
It would make sense to create value-sharing models, such as:
- Paying users who allow their data to be used for training.
- Offering extra benefits (cheaper subscriptions, tokens, free credits) in exchange for sharing data.
- Real transparency: knowing exactly which of your information is used and for what purpose.
Right now, almost no big company does this, but that will likely change. There are already debates in the EU and the U.S. about “data dividends”: if your data enriches a business, you should receive a share of that value.
👉 So yes, it would be logical to reward you. For now, the only option they offer is to opt out of training, with no way to benefit if you opt in.
Every time we use AI, we give away something valuable: our ideas, our questions, our unique way of expressing ourselves and doing things. That fuels the models, improves them, and multiplies their value in the market.
Today, that equation is unequal: we contribute for free (or, in my case, pay US$20 every month) while the companies keep growing and growing.
But wouldn’t it be fair for users to get something in return? Discounts, benefits, or even a share of the value we help create.
What do you think?
I would love to hear your thoughts!