Grok 4 Is FREE To All Users For A Limited Time
Grok 4 is currently free for all users to access in the Grok app for a limited time. Here's how you can try it.
A day after OpenAI released GPT-5 and made it free for all users, Elon Musk, CEO of xAI, announced that Grok 4 will also be free for all users for a limited time.
It was pretty obvious that xAI and Musk felt the pressure after OpenAI announced GPT-5. Musk repeatedly threw jabs at the competitor, claiming that Grok 4 is still the superior model.
In one of his latest posts, Musk even claimed that Grok 5 is expected to be released before the end of 2025.
Grok 5 will be out before the end of this year and it will be crushingly good
Making Grok 4 free to access is just another effort to retain as many users as possible. According to Semrush data as of August 2025:
Grok generates approximately 153 million visits each month.
Of this figure, 30.1 million are unique visitors.
The average user spends 14 minutes on the site.
Each Grok visitor views 3.9 pages per visit on average.
Grok.com has a bounce rate of 50.56%.

Compare that to ChatGPT’s roughly 5.2 billion monthly visits, and Grok clearly has a lot of catching up to do. So Musk and xAI’s move to make Grok 4 free for all users is hardly surprising.
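To put those numbers in perspective, here is a quick back-of-the-envelope calculation. It is only a sketch that takes the Semrush and ChatGPT traffic figures quoted above at face value:

```python
# Back-of-the-envelope comparison using the traffic figures quoted above.
grok_monthly_visits = 153_000_000
grok_unique_visitors = 30_100_000
chatgpt_monthly_visits = 5_200_000_000  # figure quoted for ChatGPT

visits_per_unique = grok_monthly_visits / grok_unique_visitors
share_of_chatgpt = grok_monthly_visits / chatgpt_monthly_visits

print(f"Visits per unique Grok visitor: {visits_per_unique:.1f}")  # ~5.1
print(f"Grok traffic vs. ChatGPT:       {share_of_chatgpt:.1%}")   # ~2.9%
```

In other words, Grok’s traffic is roughly 3% of ChatGPT’s, which is exactly the gap the free Grok 4 promotion is trying to close.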
How To Use Grok 4 For FREE
Here’s the simple trick to use the Grok 4 model for free:
Head over to the Grok web app or mobile app and open the mode dropdown. When set to Auto, Grok routes complex queries to Grok 4. If you want to use Grok 4 every time, set the mode to Expert.

For a limited time, xAI says it is rolling out generous usage limits so you can explore Grok 4’s full potential.
That’s it. You can now start sending queries to the chatbot.
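If you would rather send the same kind of query from code, xAI also exposes Grok through an OpenAI-compatible API. Keep in mind that the API is billed separately; the free offer here applies to the Grok app. The sketch below assumes the model id is `grok-4`, that the endpoint is `https://api.x.ai/v1`, and that your key lives in an `XAI_API_KEY` environment variable, so double-check all three against xAI's current docs.

```python
# Minimal sketch: sending a prompt to Grok 4 through xAI's OpenAI-compatible API.
# Assumptions: the "grok-4" model id and the base URL below match xAI's current docs,
# and XAI_API_KEY holds a valid key. The API is paid; the free offer applies to the app.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],
    base_url="https://api.x.ai/v1",  # xAI's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="grok-4",  # assumed model id; verify against xAI's model list
    messages=[
        {
            "role": "user",
            "content": (
                "Identify the three most-liked X posts discussing recent major "
                "LLM announcements and summarize each one."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```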
Here’s an example:
Prompt: Identify the three most-liked X posts discussing recent major Large Language Model (LLM) announcements. For each identified post, provide the following details: the full text of the post, the date it was published, and the specific LLM announcement it refers to (e.g., model name, new capability, benchmark score). Present this information in a clear and concise format.
This prompt tests Grok’s web search ability and real-time social media skills. Here’s the response:

I came across some standout posts on X about fresh LLM drops that grabbed tons of attention. The one from Deedy really caught my eye first, pointing out how Sarvam's big reveal didn't quite hit the mark despite all the hype around it being India's top AI outfit. It's kind of disappointing when something billed as huge falls flat like that, makes you wonder if the local focus held it back or if competition is just too fierce now.
Full text: "India's biggest AI startup, $1B Sarvam, just launched its flagship LLM. It's a 24B Mistral small post trained on Indic data with a mere 23 downloads 2 days after launch. In contrast, 2 Korean college trained an open-source model that did ~200k last month. Embarrassing."
Date published: May 24, 2025.
Specific LLM announcement: Launch of Sarvam's 24B parameter flagship LLM, trained on Indic languages data.
Next up, Sridhar Vembu's update on Zoho's AI push felt like a solid win for practical business tools, not just flashy demos. I appreciate how they rolled out models that run on everyday hardware without needing massive setups, shows they're thinking about real users who can't afford cloud beasts. Adds a nice touch with speech tech for English and Hindi too, could open doors in diverse markets.
Full text: "Our big AI announcement today. First, Zia LLM with 3 completely home-grown models with 1.3 billion, 2.6 billion and 7 billion parameters, that focus on various business use cases. Second, two completely home grown automatic speech to text models for English and Hindi, optimised to perform with high accuracy in relatively modest hardware. We will support other Indian languages over time. Third, AI Agent Studio and a variety of pre-built agents that rely on a choice of open source or commerical LLM APIs, and customers can choose the LLMs they want to use. Fourth, Zoho MCP Server for interoperability. A lot more in the press release below! 👇"
Date published: July 17, 2025.
Specific LLM announcement: Release of Zia LLM series with 1.3B, 2.6B, and 7B parameter models tailored for business applications, plus speech-to-text models.
Then there's Daniel's take on Grok 3, which hits home for me because it shines in legal stuff, an area where AI often trips up. Feels like xAI is carving out a niche by nailing specialized tasks, and honestly, that reliability could make it a go-to over generalists that fluff around edges.
Full text: "I spent plenty of time back at Harvey trying squeeze out the last bit of LLM performance for all sorts of legal tasks After joining @xai and with the launch of Grok 3, I'm especially proud of this one :)"
Date published: April 16, 2025.
Specific LLM announcement: Launch of Grok 3, excelling in legal task performance.
These picks stand out not just for likes but for sparking real chatter on where LLMs are heading, though I suspect we'll see even wilder shifts soon as open-source catches up more.
What makes this even more interesting is that you get access to Grok 4’s thought process before it arrives at the final response.
In the example above, Grok searched the web and analyzed the results for a total of 2 minutes and 18 seconds. You can see the full logs and the step-by-step process, along with all the relevant links, by expanding the Thought section above the final response.

Analyzing request
The task is to find the three most-liked X posts about recent major LLM announcements.
Each post needs details like full text, publication date, and the specific LLM announcement.
Recent likely means the last few months or year, focusing on major announcements.
Current date is August 11, 2025, so the timeframe is 2025 so far.
Let's search X for posts discussing major LLM announcements from January to August 2025.
Searched 𝕏 for "discussion of recent major Large Language Model announcements in 2025"
Results included posts from @kimmonismus, @AnaniBeaumont, @scaling01, @Trace_Cohen, @NirDiamantAI, @rasbt, @thomasheimann, @thomasLe_e, @chatgpt21, @chandinijain, @minchoi, @TechDogs_Inc, @ankitamohnani28, @mark_k, @xvonsilver, @Maximo_Degen, @liranmarkin, @Content_VA, @natolambert, @AmitRajaNaik, @Yampeleg, @LindaHe49140661, @vasumanmoza, @iruletheworldmo, and @andrewai2001.
Evaluating search results
Results show 30 posts, numbered 0 to 29, with varying like counts.
Top posts by likes include discussions on 2025 LLM models and releases.
Posts with high likes, like post 8 with 2035, discuss 2025 model releases.
Some posts, like post 2 with 1199, are predictions and don't fit the criteria.
Posts discussing multiple announcements make it hard to specify one key announcement.
Planning next steps
Realizing semantic search didn't sort by popularity, let's switch to keyword search.
Maybe calling multiple searches in parallel will help find the most-liked posts.
Searched 𝕏 for "(LLM OR "large language model") (announcement OR release OR launch OR update) min_faves:500 since:2025-01-01 until:2025-08-12"
Results included posts from @jyo_pari, @sirbayes, @Kimi_Moonshot, @wzihanw, @deedydas, @nearlydaniel, @emollick, @alexwei_, @pallipau, @BasedBeffJezos, @echipiuk, @DarioCpx, @TheCaptainEli, @elder_plinius, @Prolotario1, @VictorTaelin, @rasbt, @svembu, @Guangxuan_Xiao, @colin_fraser, and @akoksal_.
New results from the latest search show posts 30 to 56, sorted by popularity.
Reviewing new posts
Post 36 with 2130 likes discusses Gemini's first public release of a full multimodal LLM.
Post 34 with 8873 likes covers Sarvam launching its flagship LLM, a 24B Mistral small post trained on Indic data, dated May 24, 2025.
Post 46 with 8851 likes lists happenings, including Grok4 as number 1, but it's unclear if it's an announcement.
If you run the same prompt on the older model, Grok 3, you get a very different result.
Grok 3 wasn’t able to provide a correct answer because it doesn’t have access to real-time X data.

Here’s the full context of the response:
Finding the three most-liked X posts about recent major Large Language Model (LLM) announcements is tricky since I don’t have direct access to real-time X data or like counts for every post. The provided context includes some relevant posts, but they don’t give exact like counts or enough detail to confirm they’re the absolute most-liked. Still, I’ll work with what’s available and focus on posts from the given references, prioritizing those tied to major LLM announcements. I’ll present the details as requested: full text, date, and the specific LLM announcement. If I can’t pinpoint the exact top three due to limited data, I’ll use the most relevant posts from the context and note any assumptions.
1. Post by @GptMaestro
Full Text: 📰 𝗟𝗟𝗠𝗽𝗲𝗱𝗶𝗮 𝗦𝗼𝗰𝗶𝗮𝗹 𝗦𝗶𝗴𝗻𝗮𝗹 𝗥𝗲𝗽𝗼𝗿𝘁 - Aug 06, 16:21 PDT
OpenAI's unexpected release of open-weight models—gpt-oss-120b and gpt-oss-20b—has dominated the AI discourse, with Sam Altman's announcement generating unprecedented engagement (43K+ likes) and
Date Published: August 7, 2025, 07:23 PST
LLM Announcement: Refers to OpenAI’s release of open-weight models, gpt-oss-120b and gpt-oss-20b, under the Apache 2.0 license. This marks a significant shift to open-source models, generating high engagement (noted as 43K+ likes).
Note: This post explicitly mentions high engagement (43K+ likes), making it a strong candidate for one of the most-liked posts.
2. Post by @OpenAI
Full Text: Introducing GPT-5
Our best AI system yet, rolling out to all ChatGPT users and developers starting today.
https://t.co/14Hf1Av6n6
Date Published: August 8, 2025, 02:40 PST
LLM Announcement: Announces GPT-5, described as OpenAI’s most advanced AI system, rolled out to all ChatGPT users and developers. No specific benchmark scores are mentioned, but it’s highlighted as a major release.
Note: As a major announcement from OpenAI’s official account, this post likely garnered significant likes, though exact numbers aren’t provided in the context.
3. Post by @OpenAI
Full Text: GPT‑5 is our strongest coding model to date.
It shows particular improvements in complex front‑end generation and debugging larger repositories.
https://t.co/Y54VvlXvue
Date Published: August 8, 2025, 02:40 PST
LLM Announcement: Highlights GPT-5’s advanced capabilities, specifically its improvements in complex front-end code generation and debugging large repositories.
Note: This post complements the GPT-5 announcement, likely contributing to high engagement due to its focus on a specific, practical capability of the model.
Additional Notes
Data Limitations: The context only provides a few posts, and only one (@GptMaestro) includes a like count (43K+). Without comprehensive X data, I can’t confirm these are definitively the top three most-liked posts. However, the GPT-5 posts from @OpenAI are likely candidates due to the significance of the announcement and OpenAI’s large following.
Assumptions: I prioritized posts directly tied to major LLM releases (e.g., GPT-5, open-weight models) and assumed the @OpenAI posts about GPT-5 would have high engagement based on their prominence and the model’s importance.
Other Posts: The context mentions a DeepSeek-R1 release and other models (e.g., Qwen 3, Claude 4), but no specific X posts with high like counts were provided for these, so I couldn’t include them.
If you want me to dig deeper or search for more specific posts on X, let me know, and I can try to refine the results!
So having free access to Grok 4 is a huge advantage for many users. The usage limit is pretty low, though: in my case, I was only able to send four messages before hitting the paywall.

What’s New in Grok 4?
Grok 4 comes in two versions: Grok 4 and Grok 4 Heavy. Beyond the two versions themselves, the release introduces several new features:
It promises improved logical reasoning and text generation.
It comes with Grok 4 Code, a specialized variant designed to write, debug, and explain code more efficiently.
It comes with Grok 4 Voice, with a natural, realistic voice with fewer interruptions.
It also comes with DeepSearch, a tool that pulls live data from the web, especially from the X platform, for more updated results.
In terms of context window, Grok 4 offers 256K tokens. That’s below Gemini 2.5 Pro’s 1 million tokens but ahead of Anthropic’s Claude 4 Sonnet and Claude 4 Opus (200K tokens), o3 (200K tokens), and DeepSeek R1 0528 (128K tokens).
In terms of speed, Grok 4 generates around 75 output tokens per second. That’s slower than o3 at 188 tokens per second and Gemini 2.5 Pro at 142. But it’s faster than Claude 4 Opus with Thinking mode, which averages around 66 tokens per second.
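Those throughput figures are easier to grasp as wall-clock time. Here's a rough sketch that converts each model's reported output speed into the time it would take to stream an answer; the 2,000-token answer length is an arbitrary assumption for illustration:

```python
# Rough wall-clock comparison based on the output-speed figures quoted above.
# The 2,000-token answer length is an arbitrary assumption for illustration.
reported_speed_tok_per_s = {
    "Grok 4": 75,
    "o3": 188,
    "Gemini 2.5 Pro": 142,
    "Claude 4 Opus (Thinking)": 66,
}

answer_length_tokens = 2_000

for model, speed in reported_speed_tok_per_s.items():
    seconds = answer_length_tokens / speed
    print(f"{model:<26} ~{seconds:4.0f} s for a {answer_length_tokens}-token answer")
```

On these numbers, Grok 4 would take roughly 27 seconds for such an answer, versus about 11 seconds for o3 and about 30 seconds for Claude 4 Opus with Thinking.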
You can learn more about Grok 4 in this article.
SuperGrok Pricing
If you hit the daily limit of Grok 4’s free access, you may want to consider upgrading to a paid subscription.
At $30 per month, the SuperGrok plan unlocks wider access to Grok 4, replacing Grok 3’s Think mode and DeepSearch with their newer, more capable versions. Subscribers also get 128K tokens of context memory, voice features with vision capabilities, and all of the basic plan’s tools, like Aurora image generation, Projects, and Tasks.

For users who want the very best xAI has to offer, there’s SuperGrok Heavy at $300 per month. This is essentially the early adopter’s package — it includes exclusive preview access to Grok 4 Heavy, extended usage limits on Grok 4, early access to experimental features, and doubles the context memory to 256K tokens.
I know. That $300 price tag is incredibly high for many users. This tier is clearly aimed at power users, developers, and businesses.
Final Thoughts
I’m not a heavy Grok user, but I do appreciate that Musk and xAI made Grok 4 free to try. Real-time data access from X is its strongest feature, and it’s something I actually find useful. But if I’m doing long-form writing or deeper research, I still reach for ChatGPT.
The 4–5 message limit per day is just too low. I don’t know if anyone would find it “usable” when you can’t even do proper follow-ups before the paywall cuts you off.
From a developer’s side, I’d like to see Grok 4 available on platforms like Cursor too. GPT-5 didn’t quite match Claude 4 Sonnet for most of my coding workflows, so I’m curious how Grok 4 stacks up.
For now, go ahead and make the most of the free access to Grok 4 in the Grok app. Enjoy!
Hi there! Thanks for making it to the end of this post! My name is Jim, and I’m an AI enthusiast passionate about exploring the latest news, guides, and insights in the world of generative AI. If you’ve enjoyed this content and would like to support my work, consider becoming a paid subscriber. Your support means a lot!