As perianesthesia nurses, we are committed to providing the highest quality care to our patients as they undergo procedures and recover from anesthesia. With advances in technology, we have more tools than ever to help us achieve this goal, and one of the most exciting new tools is the development of artificial intelligence (AI) language models like ChatGPT.
ChatGPT is an AI language model developed by OpenAI, which has been trained on a vast amount of text data to understand and respond to human language. This technology has many potential applications in health care, including in perianesthesia nursing.
One of the key benefits of ChatGPT is its ability to provide instant, accurate, and personalized responses to a wide range of health care questions. Nurses can use ChatGPT to quickly find information on medications, procedures, and patient care, as well as to communicate with patients and colleagues in real-time. This can save valuable time and help ensure that patients receive the best possible care.
In addition, ChatGPT can help nurses stay up-to-date on the latest research and best practices in perianesthesia nursing. The AI language model can quickly search and analyze a vast amount of literature, providing nurses with the most relevant and up-to-date information on a given topic. This can help nurses make informed decisions about patient care and ensure that they are providing the most effective treatments.
Of course, it is important to remember that ChatGPT is a tool, not a replacement for human nurses. While the technology can help nurses provide better care, it cannot replace the human touch and compassion that are so essential to our profession. Nurses must continue to provide holistic, patient-centered care and use ChatGPT as a supplement to their clinical knowledge and expertise.
In conclusion, ChatGPT has the potential to be a valuable tool for perianesthesia nurses, providing quick, accurate, and personalized responses to a wide range of health care questions. As we continue to explore the possibilities of AI in health care, it is important to embrace these new technologies and use them to enhance, rather than replace, the vital work of perianesthesia nurses.
The entire editorial that you just read above, including the title, was written by ChatGPT. My question to ChatGPT was “Can you write an editorial for a perianesthesia nursing journal on the topic of ChatGPT?” Mind blown. I was mesmerized by the AI and continued to ask questions. “Do you know about perianesthesia nursing?” I asked. “Yes, I have information on perianesthesia nursing. Perianesthesia nursing refers to the specialty area of nursing that deals with the care of patients before, during, and after anesthesia and surgical procedures. The primary focus of perianesthesia nurses is to ensure the safety and well-being of patients, as well as monitor their vital signs and manage any complications that may arise. They work closely with anesthesiologists, surgeons, and other health care professionals to provide comprehensive care for patients undergoing procedures that require anesthesia.” Mind blown again.
ChatGPT (the GPT stands for generative pretrained transformer) is software programmed to simulate human conversation. It was made available on OpenAI's website in November 2022 and can be tested by the public. ChatGPT can follow instructions, answer questions, and write college-level essays.1
As a university professor who teaches a required research and writing class, I shiver at this capability. As an editor, I find the fact that ChatGPT can write research papers even more concerning. ChatGPT and other AI language models do have limitations and sometimes give incorrect answers. It incorrectly attributed my book, Drain's Perianesthesia Nursing: A Critical Care Approach, to another person, but when I pointed this out, it agreed that the attribution was incorrect. ChatGPT also said that I worked at a different university than I do, but then thanked me when I sent the correct information. Flanagin et al2 caution that the model is not yet ready to be used as a source of reliable information.
To chat with this form of AI is fascinating and gives us a glimpse of the future. We already use other forms of AI: when JoPAN's Editorial Manager system runs iThenticate on a submission to look for similarities with other papers, that is a form of AI; a rough sketch of how such similarity checking can work appears below.
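iThenticate's matching algorithm is proprietary, so the following is only a toy illustration of one classic similarity technique, comparing overlapping word "shingles" with a Jaccard score, and not iThenticate's actual method. The sample texts are invented for the example.

```python
# Toy similarity check: compare overlapping word "shingles" between two
# documents with a Jaccard score. Illustrative only; iThenticate's real
# algorithm is proprietary and far more sophisticated.

def shingles(text: str, n: int = 3) -> set:
    """Break text into overlapping n-word sequences ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(doc_a: str, doc_b: str, n: int = 3) -> float:
    """Jaccard similarity of the two shingle sets, from 0.0 to 1.0."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

submission = "Perianesthesia nurses care for patients before and after anesthesia."
prior_paper = "Perianesthesia nurses care for patients recovering from anesthesia."
print(f"Similarity score: {similarity(submission, prior_paper):.2f}")
```

A score near 1.0 flags heavy overlap with existing text; a score near 0.0 suggests original phrasing.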
However, not only have students used this form of AI to write essays; it has also passed law exams and a medical licensing test.3,4
To combat inappropriate use of ChatGPT, OpenAI has developed a new feature, the AI Text Classifier, to assist in determining whether an essay or paper was written by a human or by ChatGPT. However, they note that this feature is imperfect and should be used with caution.5
OpenAI is also beginning some work on a watermark (the use of words drawn from a special list) for any text completed by AI.5
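The cited report describes the watermark idea only at a high level. The following is a minimal, hypothetical sketch of how a word-list watermark could be detected, not OpenAI's actual scheme; the key, word-list assignment, and sample text are all invented for illustration.

```python
import hashlib

# Hypothetical word-list watermark detector. A generator biased toward a
# secret "special list" of words leaves a statistical fingerprint: text it
# produces uses list words far more often than chance (about 50%) predicts.

def on_special_list(word: str, secret_key: str = "demo-key") -> bool:
    """Pseudo-randomly assign roughly half of all words to the special list."""
    digest = hashlib.sha256((secret_key + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def special_word_fraction(text: str) -> float:
    """Fraction of words in the text that fall on the special list."""
    words = [w.strip(".,;:!?\"'()") for w in text.split()]
    words = [w for w in words if w]
    return sum(on_special_list(w) for w in words) / len(words) if words else 0.0

# Human text should hover near 0.50; heavily watermarked output would
# score well above that, which is what a detector looks for.
sample = "Perianesthesia nurses monitor patients recovering from anesthesia."
print(f"Special-list fraction: {special_word_fraction(sample):.2f}")
```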
Authors of an editorial in the Journal of the American Medical Association (JAMA) and JAMA Network are concerned about AI use in scientific writing and advised against using ChatGPT as a coauthor on a paper.2
This JAMA editorial came after Nature reported two preprints and two articles that had been published with ChatGPT included as a bylined author.6
Other organizations are developing policies that typically ban the inclusion of AI as an author and require full transparency about its use.2,7
Elsevier, the publisher of JoPAN, has developed policies, approved by its editors, that address the use of AI in our publication. Briefly, the policy states that authors should:
• Use AI and AI-assisted technologies only to improve the readability and language of the work, not to replace key researcher tasks;
• Apply the technology with human oversight and control;
• Disclose the use of AI and AI-assisted technologies in their manuscript; and
• Not list AI and AI-assisted technologies as an author or coauthor, or cite AI as an author.
You can read the entire policy in Box 1.
Box 1: The use and declaration of AI and AI-assisted technologies in scientific writing
Where authors use artificial intelligence (AI) and AI-assisted technologies in the writing process, authors should:
• Only use these technologies to improve readability and language, not to replace key researcher tasks such as interpreting data or drawing scientific conclusions.
• Apply the technology with human oversight and control, and carefully review and edit the result, as AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased.
• Not list AI and AI-assisted technologies as an author or coauthor, or cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans, as outlined in Elsevier's AI policy for authors.
• Disclose in their manuscript the use of AI and AI-assisted technologies in the writing process by following the instructions below. A statement will appear in the published work. Please note that authors are ultimately responsible and accountable for the contents of the work.
In conclusion, AI language models provide possibilities and opportunities.2 However, as with any new technology, risks are also involved.2
The reality is that the development of AI language models will continue to advance quickly. In terms of publication, the important take-home message is that tools used for the creation of information should be used appropriately and transparently. Watch for the June issue of JoPAN, in which Informatics and Health Information Technology columnist Matt Byrne discusses the health care aspects of AI language models.
Declaration of AI and AI-Assisted Technologies in the Writing Process
During the preparation of this work the author used ChatGPT to show the capability of AI in writing an editorial. After using this tool, the author reviewed and edited the content as needed and takes full responsibility for the content of the publication.
References
1. Here's what to know about OpenAI's ChatGPT—what it's disrupting and how to use it. Forbes. December 7, 2022.
2. Flanagin A, Bibbins-Domingo K, Berkwits M, et al. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023. https://doi.org/10.1001/jama.2023.1344
3. Gilson A, Safranek C, Huang T. How does ChatGPT perform on the medical licensing exams? The implications of large language models for medical education and knowledge assessment. Preprint. Posted December 26, 2022. medRxiv. 2022. https://doi.org/10.1101/2022.12.23.22283901
4. ChatGPT passes exams from law and business schools. CNN Business. January 26, 2023.
5. How ChatGPT could embed a ‘Watermark’ in the text it generates. New York Times. February 17, 2023.
6. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613:612. https://doi.org/10.1038/d41586-023-00191-1
7. Zielinski C, Winker M, Aggarwal R, et al. WAME Board. Chatbots, ChatGPT, and scholarly manuscripts: WAME recommendations on ChatGPT and Chatbots in relation to scholarly publications. January 20, 2023. Accessed February 17, 2023. https://wame.org/page3.php?id=106
Footnotes
The ideas or opinions expressed in this editorial are those solely of the author and do not necessarily reflect the opinions of ASPAN, the Journal, or the Publisher.
Conflict of Interest: None to report.
Copyright
© 2023 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.