By late February 2026, artificial intelligence had become an everyday reference point rather than a niche technical topic. Nature Materials argued that most people have encountered AI tools over the past couple of years, following the release of generative AI chatbots such as ChatGPT, and it described AI as a near-daily subject in mainstream media, including coverage tied to the stock market and to AI’s energy consumption. The picture that emerges is of a technology that is simultaneously normalized in public life and treated as a high-consequence system whose operational footprint and incentives remain under sustained scrutiny.

Scientific publishing is one of the clearest places where AI’s normalization is turning into policy. Nature Portfolio journals, as described by Nature Materials, permit authors to use AI to edit manuscripts, while cautioning that this can improve clarity but may homogenize style. The same policy draws sharper boundaries around transparency and responsibility: Nature Portfolio requires other uses of AI to be declared in the Methods section, including which model was used and how it was trained, and it does not allow AI to be listed as an author on the grounds that AI cannot be held accountable for its output. Peer review is treated as an equally sensitive boundary: Nature Portfolio journals ask reviewers not to upload manuscripts into generative AI tools and to declare any AI tool use in their peer-review report, reflecting concerns that confidentiality and the integrity of the review process could otherwise be compromised.