Generative AI

Avoiding Plagiarism

What is plagiarism?


The word “plagiarism” evokes a shudder in most, and rightly so: it has been and continues to be a problem in all fields, including publishing, the media, politics, and academia. The NPS Academic Honor Code defines “plagiarism” as:

"the use of words, information, insights, or ideas of another without crediting that person through proper citation. Unintentional plagiarism, or sloppy scholarship, is academically unacceptable; intentional plagiarism is dishonorable. You can avoid plagiarism by fully and openly crediting all sources used."

Learn more: Graduate Writing Center--Citations/Avoiding Plagiarism

Pros & Cons of Using Generative AI

Source: University of South Florida Libraries

Potential Issues of Using Generative AI

Reliability and Transparency: AI tools, including language models and chatbots, may not generate truthful or reliable answers and often lack the ability to credit the sources of their information, leading to challenges in verifying the accuracy and origin of their responses.

Bias and Equity: AI systems can reflect and perpetuate human biases, as they are trained on human-generated data. This also impacts equity, as the dominance of languages like English in AI models can marginalize Indigenous languages and cultures.

Privacy and Data Security: Concerns about how AI platforms use and store private data, along with the security of this data in critical and sensitive sectors, are major issues.

Intellectual Property and Ethical Considerations: The use of existing content by AI raises intellectual property concerns, and there's a risk of AI being used for unethical purposes, such as creating deepfakes or spreading misinformation.

Access Inequality and Impact on Employment: The requirement for payment to access quality AI tools creates a divide based on affordability, and the potential for AI to automate jobs poses challenges to employment across various sectors.

Environmental Impact and Over-reliance: The energy-intensive nature of training AI models raises environmental concerns, and over-reliance on AI tools could lead to the degradation of human skills.

Regulatory and Legal Challenges: The rapid advancement of AI presents challenges in developing adequate legal and regulatory frameworks to ensure accountability, consumer protection, and fair competition.


Potential Benefits of Using Generative AI

Efficiency, Productivity, and Innovation: AI enhances productivity by automating routine tasks and complex computations, and stimulates innovation in various fields like art, design, and problem-solving.

Personalization and Improved User Experience: AI's capability to tailor experiences to individual preferences enhances engagement in marketing, entertainment, and education.

Advanced Data Analysis and Decision Making: AI's vast data analysis capacity aids in uncovering insights for informed decision-making in healthcare, finance, and scientific research.

Language Services and Global Communication: AI-driven translation services facilitate communication across language barriers, beneficial in international business and diplomacy.

Accessibility and Healthcare Advancements: AI supports individuals with disabilities and advances healthcare through improved diagnostics, treatment personalization, and aiding in surgeries.

Educational Tools and Environmental Monitoring: AI enhances learning with personalized education and interactive tools, and contributes to environmental sustainability through efficient resource management and climate change predictions.

Business Insights and Market Analysis: AI provides businesses with market analysis, consumer behavior predictions, and trend forecasting, aiding in strategic planning.

Safety and Security Enhancements: AI improves cybersecurity, public safety, and disaster response through threat identification, risk prediction, and rapid response mechanisms.

Plagiarism Detection Tools

There is little to no research supporting the accuracy claims made in the promotion of gen AI detection tools. Most are known to be woefully inadequate, producing significant rates of both false positives and false negatives, and many do not disclose their methods or provide data to support their claims. At this time, we cannot recommend the use of gen AI detection tools.

Research is emerging on features commonly found in some gen AI output. Information will be provided as it becomes available.

Publisher Policies


Prior to using generative AI tools in a project you plan to submit for publication, ensure that the journal and publisher you are targeting permit the inclusion of AI-generated text and images in manuscript submissions. Below are excerpts from selected publishers. VISIT PUBLISHER PAGES FOR FULL DETAILS.

Train Your Brain!

Misinformation & Bias in AI

Source: Georgetown University

Misinformation

While generative AI tools can help users with such tasks as brainstorming new ideas, organizing existing information, mapping out scholarly discussions, or summarizing sources, they are also notorious for not relying fully on factual information or rigorous research strategies. In fact, they are known for producing "hallucinations," an AI term for false information that the system generates and presents as fact. Oftentimes, these "hallucinations" are delivered in a very confident tone and consist of partially or fully fabricated citations or facts.

Certain AI tools have even been used to intentionally produce false images or audiovisual recordings to spread misinformation and mislead audiences. Referred to as "deepfakes," these materials can be utilized to subvert democratic processes and are thus particularly dangerous.

Additionally, the information presented by generative AI tools may lack currency as some of the systems do not necessarily have access to the latest information. Rather, they may have been trained on past datasets, thus generating dated representations of current events and the related information landscape.

Bias

Another potentially significant limitation of AI is the bias that can be embedded in the products it generates. Fed immense amounts of data and text available on the internet, these large language models are trained simply to predict the most likely sequence of words in response to a given prompt, and they therefore reflect and perpetuate the biases inherent in that internet training data. An additional source of bias lies in the fact that some generative AI tools use reinforcement learning with human feedback (RLHF), and the human testers who provide this feedback are themselves not neutral. Accordingly, generative AI like ChatGPT has been documented producing output that is socio-politically biased, occasionally even containing sexist, racist, or otherwise offensive information.
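The next-word-prediction mechanism described above can be illustrated with a deliberately tiny bigram model. This is a toy sketch, not how production language models are actually built, and the three-sentence "corpus" and its occupational skew are invented purely for illustration; real systems use neural networks trained on vastly larger data, but the underlying principle is the same: the model echoes the statistical regularities, including the biases, of whatever text it was trained on.

```python
from collections import Counter, defaultdict

# Invented toy corpus: the model only "knows" what its training text
# contains, so any skew in the text becomes a skew in the predictions.
corpus = (
    "the nurse said she is ready . "
    "the nurse said she is tired . "
    "the engineer said he is ready . "
).split()

# Count bigram frequencies: for each word, how often each next word appears.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict_next("said"))  # "she" — seen twice after "said", vs. "he" once
```

Because "she" follows "said" twice in the corpus and "he" only once, the model confidently predicts "she" — not because it is true or fair, but because that is the most likely continuation in its training data.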

Related Recommendations  

  • Meticulously fact-check all of the information produced by generative AI, including verifying the source of all citations the AI uses to support its claims.
  • Critically evaluate all AI output for any possible biases that can skew the presented information. 
  • Avoid asking the AI tools to produce a list of sources on a specific topic as such prompts may result in the tools fabricating false citations. 
  • When available, consult the AI developers' notes to determine if the tool's information is up-to-date.
  • Always remember that generative AI tools are not search engines; they simply use large amounts of data to generate responses constructed to "make sense" according to common cognitive paradigms.

Selected Readings