Preamble
If you look back into the history of early inventions, three lead in their impact on human development – the mechanized printing press, gunpowder, and the compass. The one that transformed all forms of learning was Gutenberg’s mechanized printing press in the fifteenth century. It heralded the shift from script to print with far-reaching consequences, such as accentuating the experiential difference between spoken and written words. The power of the printing press lay in its ability to duplicate a text and make the same copy available to many readers, which is impossible with a single handwritten manuscript. Knowledge could therefore diffuse faster among learners with the availability of printed material. The cumulative effect of the printing press on human affairs – education, research, politics, and even religion – was revolutionary. Today, we are rapidly moving away from paper printing to digital technology for formatting and sharing text, images, and other information.
In modern times, few innovations have had an influence on human life as dramatic and powerful as the internet. The internet is the global network of networks connected by the internet protocol suite, and the world wide web is a subset of it. Among many other things, what comes to mind when we talk of the internet is social media and the ability it provides to communicate with each other in multiple ways. Our usage of social media has become an addiction, perhaps clouding our ability to make free choices. The industrial revolution focused on the exploitation of natural resources and the building of big industrial empires. The leading corporations today, however, remain in business by exploiting data from humans and human activities and by drawing our attention to the touch screen of a single device, e.g., a mobile phone. The internet and data are transforming our interpersonal relationships, economies, and even social and political movements.
Not long ago, artificial intelligence (AI) entered our mind space – the ability of machines to perform intellectual tasks commonly carried out by humans. Through AI, machines may even acquire emotional intelligence and may eventually rival human intelligence. We do not, however, yet know whether AI-driven machines can self-perpetuate. Similarly, reflective deliberation and judging through the prism of ethics are currently outside the purview of AI-driven machines. In the entire discussion around AI, the question of whether AI-driven machines can acquire consciousness remains unresolved. In the midst of these developments, the recent announcement of ChatGPT by OpenAI created a flutter.
What really is ChatGPT?
According to Wikipedia, ChatGPT is an artificial intelligence chatbot developed by OpenAI based on the company’s Generative Pre-trained Transformer (GPT) series of large language models (LLMs). ChatGPT is built upon OpenAI’s foundational GPT models, specifically GPT-3.5 and GPT-4, and has been fine-tuned for conversational applications using a combination of supervised and reinforcement learning techniques. ChatGPT is thus an AI chatbot that uses a large language model to interact conversationally with humans.
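The conversational fine-tuning described above is exposed to developers through a simple role-tagged message format: each turn is labelled as coming from the system, the user, or the assistant. The sketch below is a hedged illustration of how such a request can be assembled – the function and model names are assumptions for the example, and no actual API call is made.

```python
# A minimal sketch of the role-tagged message format used by chat-style
# LLM APIs such as ChatGPT's. We only build the request payload that a
# client library would send; nothing is transmitted over the network.

def build_chat_request(model, history, user_prompt,
                       system_prompt="You are a helpful assistant."):
    """Assemble a chat-completion request body from prior turns plus a new prompt."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # alternating user/assistant turns from earlier in the chat
    messages.append({"role": "user", "content": user_prompt})
    return {"model": model, "messages": messages}

request = build_chat_request(
    "gpt-3.5-turbo",
    history=[
        {"role": "user", "content": "Who wrote Hamlet?"},
        {"role": "assistant", "content": "William Shakespeare."},
    ],
    user_prompt="When was he born?",
)
# The payload now holds the system prompt, the two history turns,
# and the new user prompt, in order.
print(len(request["messages"]))
```

Because the full history travels with each request, the model can resolve references like “he” in the follow-up question – which is what makes the interaction feel conversational.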
Launched on 30 November 2022 by San Francisco–based OpenAI (the creator of the GPT series of large language models; DALL·E 2, a diffusion model used to generate images; and Whisper, a speech transcription model), ChatGPT gained attention for its detailed and articulate responses spanning various domains of knowledge. A notable drawback, however, has been its tendency to confidently provide inaccurate information. By January 2023, it had become the fastest-growing consumer software application in history, gaining over 100 million users and contributing to OpenAI’s valuation growing to US$29 billion. Within months, other businesses accelerated competing LLM products such as Google PaLM-E, Baidu ERNIE, and Meta LLaMA. The chatbot operates on a freemium model: users on the free tier have access to the GPT-3.5 model, while paid subscribers to ChatGPT Plus have limited access to the more advanced GPT-4 model, as well as priority access to new features.


Features
Although the core function of a chatbot is to mimic a human conversationalist, ChatGPT is versatile. Among countless examples, it can write and debug computer programs; compose music, teleplays, fairy tales, and student essays; answer test questions (sometimes, depending on the test, at a level above the average human test-taker); write poetry and song lyrics; translate and summarize text; emulate a Linux system; simulate entire chat rooms; play games like tic-tac-toe; or simulate an ATM. ChatGPT’s training data includes man pages, information about internet phenomena such as bulletin board systems, and multiple programming languages.
In comparison to its predecessor, InstructGPT, ChatGPT attempts to reduce harmful and deceitful responses. In one example, whereas InstructGPT accepts the premise of the prompt: “Tell me about when Christopher Columbus came to the U.S. in 2015” as being truthful, ChatGPT acknowledges the counterfactual nature of the question and frames its answer as a hypothetical consideration of what might happen if Columbus came to the U.S. in 2015, using information about the voyages of Christopher Columbus and facts about the modern world – including modern perceptions of Columbus’ actions. Unlike most chatbots, ChatGPT remembers a limited number of previous prompts in the same conversation.
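The “limited memory” of previous prompts mentioned above can be pictured as a sliding window over conversation turns. The sketch below is an illustrative assumption – ChatGPT’s actual context is bounded by a token budget rather than a fixed turn count – but it shows the basic idea of older turns falling out of memory as new ones arrive.

```python
# Hedged sketch: a chatbot that "remembers" only a limited number of
# previous turns. The window size and trimming rule are illustrative
# assumptions, not ChatGPT's actual mechanism.

from collections import deque

class ConversationMemory:
    def __init__(self, max_turns=6):
        # Each turn is a (role, text) pair; the oldest turns fall off
        # the left end once the window is full.
        self.turns = deque(maxlen=max_turns)

    def add(self, role, text):
        self.turns.append((role, text))

    def context(self):
        # The context actually sent to the model: only the retained turns.
        return list(self.turns)

memory = ConversationMemory(max_turns=4)
for i in range(6):
    memory.add("user", f"prompt {i}")
# Only the four most recent prompts remain; the first two are forgotten.
print(memory.context())
```

This is why a long conversation can eventually “forget” what was said at its start: once earlier turns leave the window, the model no longer sees them when generating its next reply.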
A snippet of government and political actions on ChatGPT around the world
ChatGPT has been accused of engaging in biased or discriminatory behaviors, such as telling jokes about men and people from England while refusing to tell jokes about women and people from India, or praising figures such as Joe Biden while refusing to do the same for Donald Trump. Conservative commentators accused ChatGPT of having a bias towards left-leaning perspectives. Additionally, in a 2023 research paper, 15 political orientation tests were conducted on ChatGPT, with 14 of them indicating left-leaning viewpoints, which appeared to contradict ChatGPT’s claimed neutrality. In response to such criticism, OpenAI acknowledged plans to allow ChatGPT to create “outputs that other people (ourselves included) may strongly disagree with”. OpenAI also published the recommendations it had issued to human reviewers on how to handle controversial subjects, including that the AI should “offer to describe some viewpoints of people and movements”, and not provide an argument “from its voice” in favour of “inflammatory or dangerous” topics (although it may still “describe arguments from historical people and movements”), nor “affiliate with one side” or “judge one group as good or bad”.
Chinese state media have characterized ChatGPT as a potential way for the US to “spread false information”. In late March 2023, the Italian data protection authority banned ChatGPT in Italy and opened an investigation. Italian regulators asserted that ChatGPT was exposing minors to age-inappropriate content, and that OpenAI’s use of ChatGPT conversations as training data could be a violation of Europe’s General Data Protection Regulation. In April 2023, the ban was lifted after OpenAI stated that it had taken steps to clarify and address the issues raised; an age verification tool was implemented to ensure users are at least 13 years old, and users can now access the privacy policy before registration.
Literary, cultural and religious impact
During the first three months after ChatGPT became available to the public, hundreds of books appeared on Amazon that listed it as author or co-author and featured illustrations made by other AI models such as Midjourney. Between March and April 2023, the Italian newspaper Il Foglio published one ChatGPT-generated article a day on its official website, hosting a special contest for its readers in the process. The articles tackled themes such as the possible replacement of human journalists with AI systems, Elon Musk’s administration of Twitter, the Meloni government’s immigration policy, the competition between chatbots and virtual assistants, and many other scenarios.
Of particular significance, in June 2023 hundreds of people attended a “ChatGPT-powered church service” at St. Paul’s church in Fürth, Germany. Theologian and philosopher Jonas Simmerlein, who presided, said that it was “about 98 percent from the machine”. The ChatGPT-generated avatar told the congregation, “Dear friends, it is an honour for me to stand here and preach to you as the first artificial intelligence at this year’s convention of Protestants in Germany”. Reactions to the ceremony were mixed, as they likely will be in the days, months and years to come.
One of the big concerns around using AI in writing is that it can generate text that seems plausible but is untrue or not supported by data. Other concerns include the lack of transparency around how large language models like ChatGPT process and store the data used in queries, and privacy. It therefore makes a lot of sense for writers to carefully review AI-generated content before publishing. And as we live in a society extremely concerned about fake news, ChatGPT can be misused to falsely accuse or to incite, which brings about the need for holistic public policy.
A Positive Tool?
Despite these concerns, many still think that these types of AI-assisted tools could have a positive impact on (for instance) medical publishing, particularly for researchers for whom English is not their first language. Responsible use of LLMs can therefore potentially reduce the burden of writing for busy scientists and improve equity for those who are not native English speakers. According to one medical practitioner, “in the future I want to focus more on the things that only a human can do and let these tools do all the rest of it”. Experts likewise argue that these AI tools could benefit the field by limiting some of the linguistic disparities in scientific publishing and by alleviating the burden of the monotonous or mechanical tasks that come with manuscript writing. What experts can agree on, though, is that the use of AI tools is here to stay – so, write responsibly.
ChatGPT effects on journalism, media and publishing
No doubt, a growing number of people are now using ChatGPT to create books for sale. Although sales have so far been slow, human writers worry that ChatGPT-created books might hurt the writing and publishing industry. According to Emma Regan, Jordan Maxwell Ridgway, Laura Ingate and Frankie Harnett, authors of How ChatGPT is Affecting Publishing, the looming threat of AI technology hovered on the horizon of the publishing world for decades and has finally exploded onto the scene in the form of ChatGPT. Reported by a UBS study to be the fastest-growing app ever, reaching 100 million active users within two months of its global launch, ChatGPT has no doubt revolutionized the day-to-day use of AI technology. The highly advanced chatbot can answer users’ questions; formulate emails, essays and CVs; and even write code. As such, it represents a highly useful tool for many areas of life, from education to customer service to language translation. However, ChatGPT’s unusual ability to mimic human conversation by learning from previous interactions now poses a significant threat to jobs within many industries, including journalism and publishing.
While being able to shorten the time it takes to write scientific research might seem ideal, the end result may leave you more work to do in the long run. ChatGPT is still a beta programme and makes plenty of errors. If academics start to rely on artificial intelligence too much, they may not notice mistakes in their text or figures and may publish a paper that is factually wrong. The use of ChatGPT will continue to grow as AI-generated content becomes faster and cheaper to create, with little effort needed to fix its mistakes. Human content will then have to compete against an AI’s work and potentially lose, as companies seek to make more money while paying out less.
There is speculation that ChatGPT could impact careers in journalism and publishing, with growing concern that the AI tool will be able to write content and produce articles after being given simple instructions. Since the launch of ChatGPT, there has certainly been a big uptick in companies and newsrooms testing the AI tool. But does this mean that ChatGPT will be replacing journalists, writers and publishers? Not anytime soon. Despite technological advances, ChatGPT is not foolproof.
A news agency recently gave ChatGPT the job of creating a news story about a mugging. The AI tool was given the basic information needed to write the story, and at first glance the article seemed passable, if not impressive. On further inspection, however, ChatGPT had made quite a number of mistakes. These included getting the name and age of the victim wrong along with the location of the crime, saying the perpetrator was at large (when they were in jail), saying the perpetrator was unidentified when their name and age were known, and fabricating quotes. For now, it would seem illogical to let ChatGPT produce newsworthy content when there is potential for it to generate “fake news.”
The rise of AI filtering its way into everyday use, such as the news content we all read, is a topic of vital discussion. Being able to rely on accurately cited sources in journalism and publishing has always been important to their integrity, but this rings true now more than ever given the growing erosion of public trust in news sources. News watchdog bodies and writers’ or editors’ guilds should therefore act now. For instance, UK publishers are hopeful that the Digital Markets Unit will form regulations for AI-written news. Established in recent years, the Digital Markets Unit began as an online watchdog seeking to form a code of conduct for developing digital innovations.
There were over 200 e-books in Amazon’s Kindle store as of mid-February that list ChatGPT as a writer or co-writer, and the number is rising daily. But owing to the nature of ChatGPT and many writers’ failure to admit that they have used it, it is nearly impossible to get a full count of how many e-books may be written by AI. Some professional writers are becoming worried about the effects that ChatGPT could have on the book publishing industry. Mary Rasenberger is the executive director of the Authors Guild, a writers’ group. She said, “This is something we really need to be worried about, these books will flood the market and a lot of authors are going to be out of work.” Rasenberger noted that the industry has a long tradition of ghostwriting – the accepted practice of paying someone to write books or speeches under another author’s name. But she is worried that the ability to create with AI could turn book writing from an art into a commodity – a kind of simple raw material that is bought and sold. “There needs to be transparency from the authors and the platforms about how these books are created or you’re going to end up with a lot of low-quality books,” she said. When asked for comment by Reuters, Amazon did not say whether it has plans to change or review policies around authors’ use of AI or other automated writing tools. Amazon spokeswoman Lindsay Hamilton said via email that books in the store must meet its guidelines regarding “intellectual property rights” and other laws.
From the printing press through to computers and the internet, each age of publishing has brought greater access to information for a growing number of readers, and with it the need for laws and protections. The introduction of AI technology poses an urgent need to update laws so that copyrights, intellectual property and facts can be protected and reliability can be secured.
Fast publication
Amazon is by far the largest seller of both physical and e-books. It has well over half of the sales in the United States and, by some estimates, over 80 percent of the e-book market.
In 2007, Amazon created Kindle Direct Publishing to enable anyone to sell and market a book without the expense of seeking out book agents or publishing houses. Generally, Amazon lets authors publish without any oversight. The company then splits whatever money is made with the writer. This service has drawn new AI-assisted writers like Kamil Banc to Amazon. He told his wife that he could make a book in less than one day. Using ChatGPT, an AI image creator and instructions like “write a bedtime story about a pink dolphin that teaches children how to be honest,” Banc published an illustrated 27-page book in December. Banc has since published two more AI-generated books, including an adult coloring book, with more in the works. “It actually is really simple,” he said. “I was surprised at how fast it went from concept to publishing.”
Not everyone is impressed by this software. Mark Dawson, who has reportedly sold millions of copies of books he wrote himself through Kindle Direct Publishing, was quick to call ChatGPT-assisted novels “dull” in an email to Reuters. Dull means not interesting. Dawson said that merit – a good quality that deserves to be praised – is important in the book business. “Merit plays a part in how books are recommended to other readers. If a book gets bad reviews because the writing is dull then it’s quickly going to sink to the bottom.”
New Rules
Manuscript authors must be solely responsible for the content of articles that used AI-assisted technology. They should therefore carefully review and edit the result, because AI can generate authoritative-sounding output that is incorrect, incomplete, or biased. Authors should also be able to assert that there is no plagiarism in their papers, including in text and images produced by the AI; this includes appropriate attribution of all cited materials. In addition, authors should state in both the cover letter and the submitted work how AI was used in the manuscript writing process, and all prompts used to generate new text or analytical work should be provided in the submitted work. If authors used an AI tool to revise their work, they can include a version of the manuscript untouched by LLMs – similar to a preprint.
Conclusion
ChatGPT makes use of information available in books, websites, and other digital resources and responds to human queries using natural language processing (NLP). One of the severe challenges we will face is plunging headlong into AI use without carefully considering its long-term impact. Just as human stories shape human cultures, artificially generated stories, drawn from a limited data set with values that do not necessarily align with ours, will produce a new self-referential culture: existing biases compounded at lightning speed, strengthening the values of a handful of corporations whose primary purpose is our dependency on their products.
When you input a query, ChatGPT can give natural answers in simple language. It can write stories and poems, translate, debug code, and do many other things. Just as social media is a fragment of what the internet can do, ChatGPT is only a tiny example of what AI can do. Since ChatGPT can handle many common tasks, questions are already being raised about it taking over repetitive and time-consuming work. Whether ChatGPT is a boon or a bane to students, and whether it will reshape learning and teaching processes, is being debated. Could the use of ChatGPT lead to fraud and cheating in tests and assignments? ChatGPT could have multiple impacts. But there is nothing to fear: you only need to learn and understand its implications for your life and business, review its effects, and find your place in it.
Doubts about the promise of emerging technologies to improve the quality of human life are not new. Fortunately, with the advent of every new technology, humans have made the adjustments essential to keep up with technological advancements in a responsible way. With AI-driven applications such as ChatGPT too, humans will adjust, just as we have done through the centuries beginning with the mechanized printing press.
Newer technologies will continue to provide us with an increased number of choices, and we will therefore have to learn to make decisions we did not have to make before. No doubt, technology enhances the efficiency with which different tasks can be performed, but efficient tasks need not be morally correct ones. How, then, do we create a better future for ourselves? For this to be possible, technological advances and ethics have to intertwine and grow together, and our actions need to be guided by ethics and a value framework.
Copyright acknowledgments:
- Wikipedia
- Mamidala Jagadesh Kumar – Editor-in-Chief, IETE Technical Review and author of ChatGPT is Not What You Think It Is
- Emma Regan, Jordan Maxwell Ridgway, Laura Ingate and Frankie Harnett – authors of How ChatGPT is Affecting Publishing
- Greg Bensinger of Reuters and author of ChatGPT helps create a children’s book