Source: http://www.mashable.com
Mashable: Should we be amused or should we cry?
In the past year, two technological advances have made both progress and noise: Lensa AI and ChatGPT (created by OpenAI). The former uses AI to generate images, while the latter answers questions, even if some of its answers are wrong. Recently, the uses people have discovered for ChatGPT have been fun but also astoundingly alarming.
Whether it is plagiarism (a concern for both schools and writers), the creation of malware, or even letting the AI take control of conversations on dating apps, people have used the software for everything under the sun. However, of the many areas it threatens, the field at the greatest disadvantage is journalism.
Due to global concerns, we at Mashable tried the software to understand the problem areas. The first and foremost is plagiarism: whatever the AI produces can be highly biased, yet it reads as entirely original. Since ChatGPT employs NLP (natural language processing), it is challenging to know the exact source of its information, as the software does not simply copy-paste answers. This is why things get tricky; we do not know whether the information comes from trusted sources, and the AI does not follow the journalistic integrity humans do.
When known journalists such as Rana Ayyub or Jamal Khashoggi publish a damning or praising report on a matter, readers know that a human with years of experience has carefully thought about and judged the situation before publishing the article. The same, however, cannot be said about an AI answering whatever question you pose. Mind you, even if the AI states it cannot answer a biased question, all one has to do is phrase it correctly to receive the desired response.
"The implication was clear: that tools like ChatGPT will now allow scofflaws to pollute the internet with near-infinite quantities of bot-generated garbage." https://t.co/8fArvVgNe0
— Patrick George (@bypatrickgeorge) January 23, 2023
"The implication was clear: that tools like ChatGPT will now allow scofflaws to pollute the internet with near-infinite quantities of bot-generated garbage." https://t.co/8fArvVgNe0
— Patrick George (@bypatrickgeorge) January 23, 2023
Earlier this month, when CNET came under fire for using artificial intelligence to publish articles on financial planning, the issue was plagiarism and bias. While this is not a new practice (The Swaddle points out that The Associated Press and The Washington Post have used it before), the difference lies in whether readers are made aware of it. CNET chose to hide its use of AI, while the other two disclosed it, and hiding it leads to trust issues with audiences.
A client informed me that he will no longer pay me to write content for his website because A.I. can write it for free, but he wants to pay me a fraction of my usual rate to "rewrite it" in different words so it can pass Google's A.I. detection screening.
— Jason Colavito (@JasonColavito) January 7, 2023
Secondly, when we asked the AI to help us pitch an email to a recruiter, it gave us a response that was both brilliant and chilling. It serves as a reminder that AI can easily replace writers, including journalists, with many foreshadowing that companies will come to depend on AI to churn out copy at scale. This will not only affect quality but also force humans to work toe-to-toe with these technologies.
At the same time, a company is always about profit. That means it will choose AI over humans, as the latter need time, rest days, and better working environments to function. In another test, ChatGPT beat human candidates during a recruitment process, proving my previous point.
From where I am standing, this is an Orwellian nightmare. It took me over six years (including four years of education) to learn how to write creatively, efficiently, and with an eye on the larger picture. Now an AI can do my job in mere seconds, and it does not tire easily. Despite being in its nascent stages, ChatGPT proves that it can create ethical problems in which humans stand to lose more than they gain. If done correctly, as The Associated Press has shown us, humans and AI can work together for society's greater good. If we follow CNET's example, humanity, collectively, will always be at a disadvantage, no matter how much we have achieved.