
Students use “AI Humanizer” tools to make AI text look human

Welcome back. AI is getting starry-eyed. Astronomers at the European Space Agency used AI to uncover 800 previously undiscovered astrophysical anomalies in the Hubble telescope archives. The astronomers, David O’Ryan and Pablo Gómez, trained a model to dig through 35 years of the telescope’s archives and pick out things that were out of the ordinary. It’s the latest example of AI being used to expedite scientific development, with researchers using AI for everything from decoding the human genome to mapping earthquakes. No wonder OpenAI is eager to create AI-powered lab partners.

Nat Rubio-Licht

IN TODAY’S NEWSLETTER

1. Students use “Humanizer” tools for AI text

2. Report: SoftBank in talks for $30B OpenAI bet

3. New Gemini model levels up image understanding

CULTURE

Students use “AI Humanizer” tools to make AI text look human

There are telltale signs that AI wrote a piece of text, such as em dashes, unnatural-sounding sentences, a monotonous tone and feigned excitement. But what if AI could help make text sound human?

That technology exists: College students are already using AI “humanizers,” according to an NBC News report. As the name implies, these humanizers review text for traces of AI use and then suggest changes to make it look more human-written.

A quick search for “AI Humanizer” online turns up endless options, including some from established companies such as Grammarly, which advertises its offering as “a tool that rewrites AI-generated text—like content from Grammarly, ChatGPT, or Claude—to improve clarity, flow and readability.” Quillbot offers a similar AI Humanizer tool that can be added to Chrome for easier access. Both tools are free, with many paid options also available.

Though students turn to these AI humanizers to hide the fact that they used AI in the first place, many also use them to protect themselves against wrongful accusations of AI use, NBC News reported.

As generative AI tools became more popular, educators were met with the challenge of determining which content was student-generated and which was AI-generated. As a result, they turned to AI plagiarism detectors, which are notorious for misidentifying AI use and have falsely accused many students. Studies have even found that these detectors are biased against non-native English writers.

Both educators and students in the report shared frustration with students having to prove that their work is authentic. Even students who have never touched AI tools are being wrongly accused, in some instances simply for handing in high-quality work.

Ultimately, the rise of AI humanizers is only a symptom of a larger problem: a cat-and-mouse chase in which, as AI systems become more advanced, so does paranoia about AI-generated content, discouraging students from bothering to produce good work at all.

A more permanent solution that goes beyond AI humanizers and detectors requires educators to adapt assignments and testing to the AI-first era we live in, such as moving toward in-class assignments or testing for comprehension rather than execution.

The demand for AI humanizers highlights a continued reliance on AI tools. Even when using AI could have negative consequences, people are not pivoting away from it or learning to use it more collaboratively, such as for outlining essays; instead, they are looking for tools that help them get away with having AI do the work. This is where AI literacy could help: ultimately, people need to understand that the negative impacts go beyond getting caught and can set back their own skill development.

Sabrina Ortiz, Senior Reporter

TOGETHER WITH METICULOUS

Still writing tests manually?

Companies like Dropbox, Notion and LaunchDarkly have found a new testing paradigm, and they can't imagine working without it. Built by ex-Palantir engineers, Meticulous autonomously creates a continuously evolving suite of E2E UI tests that delivers near-exhaustive coverage with zero developer effort, impossible to deliver by any other means.

It works like magic in the background:

  • Near-exhaustive coverage on every test run

  • No test creation

  • No maintenance (seriously)

  • Zero flakes (built on a deterministic browser)

MARKETS

Report: SoftBank in talks for $30B OpenAI bet

SoftBank just can’t seem to stop bankrolling OpenAI. 

On Wednesday, the Wall Street Journal reported that the Japanese conglomerate is in talks to invest up to an additional $30 billion in OpenAI. This comes as the AI firm continues its quest to garner up to $100 billion in new capital.

OpenAI is courting investors left and right for this 12-figure goal. The company is in talks with sovereign wealth funds in the Middle East to contribute around $50 billion and discussed a $10 billion investment from Amazon.

If the round goes through, OpenAI and SoftBank would break a record that they set themselves last year, when SoftBank led a $41 billion funding round in the company, announced in March and completed in December. 

  • The round would also skyrocket OpenAI’s valuation to $830 billion and widen its lead against Anthropic, which is reportedly eyeing a $350 billion valuation in its upcoming sweep of funding.

  • As it stands, SoftBank already owns roughly 11% of OpenAI, having sold off its nearly $6 billion stake in Nvidia to plunge into OpenAI. 

SoftBank's bet also comes as investors flock to rival Anthropic, which reportedly doubled its upcoming funding round target from $10 billion to $20 billion amid its strategic success among enterprise clients.

It’s logical that SoftBank wants to see OpenAI flourish. Masayoshi Son, the CEO of the investment firm, holds a similar exuberance and optimism for the future of the technology as OpenAI’s Sam Altman. In an essay published to SoftBank’s website detailing its driving philosophy, Son painted a picture in which developers realize “artificial super intelligence,” or AI that is “ten thousand times more intelligent than human wisdom.” It’s a far cry from the doom-and-gloom future that technologists like Geoffrey Hinton warn of, and goes a step beyond even Anthropic CEO Dario Amodei’s cautious optimism. To put it plainly, SoftBank is all in on an AI future, and is betting that OpenAI is going to be in the driver's seat.

Nat Rubio-Licht

TOGETHER WITH YOU

Successful AI transformation starts with deeply understanding your organization’s most critical use cases. This practical guide from You.com walks through a proven framework to identify, prioritize, and document high-value AI opportunities. 

In this AI Use Case Discovery Guide, you’ll learn how to:

  • Map internal workflows and customer journeys to pinpoint where AI can drive measurable ROI

  • Ask the right questions when it comes to AI use cases

  • Align cross-functional teams and stakeholders for a unified, scalable approach

PRODUCTS

New Gemini model levels up image understanding

AI models have long prioritized text over images. Google's new agentic model changes that.

Agentic Vision in Gemini 3, unveiled Tuesday, combines visual reasoning with code execution to actively understand images. Google explains that AI models like Gemini typically take a single static glance at the world and, if they miss a detail, compensate with a guess. Instead, Agentic Vision in Gemini 3 “treats vision as an active investigation,” according to the tech giant.

The results speak for themselves: Gemini 3 Flash with code execution performs up to 10% better than Gemini 3 Flash alone across most vision benchmarks, including MMMU Pro, Visual Probe and OfficeQA.

Here’s how it works:

  • Zooming in: Instead of just taking a single glance at an object and missing some details, Gemini 3 Flash is trained to zoom in when fine-grained details are detected. 

  • Annotating images: With Agentic Vision, the model can annotate images, going beyond simply describing an image by executing code that draws directly on it to ground its reasoning. For example, Google includes a sample prompt in which a user asks Gemini how many fingers are in an image of a hand. Agentic Vision uses Python to draw boxes over every finger it identifies, then assigns each a number to produce an accurate final answer.

  • Plotting and visual math: While standard LLMs typically hallucinate during multi-step visual arithmetic, according to Google, Agentic Vision can “parse through high-density data tables and execute Python code to visualize the findings.” This means it can analyze a data table and convert it into other mediums, such as bar charts and graphs.
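To make the annotation step concrete, here is a rough, hypothetical sketch of the kind of Python such a model might execute internally: the bounding boxes below are made-up coordinates standing in for the model's detections, not actual Gemini output.

```python
# Hypothetical sketch of an agentic-vision annotation step:
# draw a numbered box over each detected object, then count them.
# Box coordinates here are invented for illustration.
from PIL import Image, ImageDraw

def annotate_and_count(image, boxes):
    """Draw a numbered rectangle for each detected box; return the count."""
    draw = ImageDraw.Draw(image)
    for i, (left, top, right, bottom) in enumerate(boxes, start=1):
        draw.rectangle((left, top, right, bottom), outline="red", width=2)
        draw.text((left + 4, top + 4), str(i), fill="red")
    return len(boxes)

# Pretend the model detected five fingers in a 200x200 image of a hand.
img = Image.new("RGB", (200, 200), "white")
finger_boxes = [
    (10, 20, 40, 120),
    (45, 10, 75, 110),
    (80, 5, 110, 105),
    (115, 10, 145, 110),
    (150, 40, 180, 120),
]
count = annotate_and_count(img, finger_boxes)
print(count)  # the grounded count the model would report as its answer
```

Grounding the answer in drawn, numbered boxes is what lets the model verify its count against the image rather than guessing from a single glance.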

In practice, for example, Google’s model can more accurately identify the number of objects in a picture or read small-print text on an object, which can be useful on its own or serve as context for answering broader questions or tackling bigger tasks.

Agentic Vision is currently available in the Gemini API in Google AI Studio and Vertex AI. Non-developers will also be able to access it in the Gemini app by selecting “Thinking” from the model drop-down, where it is currently rolling out.

Over the past year, AI companies have raced to improve image and video generation. OpenAI's Sora and Google's Imagen 3 and Veo produce strikingly realistic media, pushing the technology forward dramatically. But this progress has focused almost entirely on creating new content. Accurate image analysis is equally important, if not more so. Users need AI assistance with visual tasks far more often than they need to generate new images, making analysis capabilities critical for everyday applications.

LINKS

  • Model Vault: Cohere launched this fully isolated SaaS platform to help customers run Cohere models securely. 

  • Google Developer Program (GDP): The GDP subscription benefits are now integrated into Google AI Pro and Google AI Ultra subscriptions at no extra cost. 

  • Mistral Vibe 2.0: The model is now available on Le Chat Pro and Team plans, adding custom subagents, multi-choice clarifications, unified agent modes, and more. 

  • Speakly: A new AI dictation app for Mac and PC that the company claims is four times quicker than typing. 

  • Microsoft Excel: A new Agent Mode is live in Excel, letting users collaborate with Copilot without leaving the app.

  • Canva: Staff Research Scientist - Video & Audio Generative AI

  • Salesforce: AI Security Architect

  • Scale AI: Staff Machine Learning Research Scientist, LLM Evals

  • Anthropic: TPU Kernel Engineer

GAMES

Which image is real?


POLL RESULTS

Have you tried vibe-coding your own app, software, or webpage yet?

Yes (33%)
No (42%)
Other (25%)

The Deep View is written by Nat Rubio-Licht, Sabrina Ortiz, Jason Hiner, Faris Kojok and The Deep View crew. Please reply with any feedback.

Thanks for reading today’s edition of The Deep View! We’ll see you in the next one.

“I just can't imagine why anyone would build a staircase where the glass panels don’t meet flush along the top edge.”

“[This image] seemed to have little imperfections such as the curtain not hanging perfectly.”

“the carpet gave it away.”

“What? The glass railing looks off on the ‘right’ image with it being behind the staircase.”

“The floor looks more real. the stair construction is more realistic.”

“I was misled because I saw no water in the jar in [the other image].”

Take The Deep View with you on the go! We’ve got exclusive, in-depth interviews for you on The Deep View: Conversations podcast every Tuesday morning.

If you want to get in front of an audience of 750,000+ developers, business leaders and tech enthusiasts, get in touch with us here.