EuroGNC AI Policy
Starting with the 2026 conference, the inclusion of a section called “Declaration of Use of Artificial Intelligence” is mandatory, whether artificial intelligence (AI) was used or not (cf. authors instructions).
Introduction
The EuroGNC AI policy explicitly forbids some potential uses of AI, limits others to specific scenarios or scopes, and deems the remaining ones acceptable. We acknowledge that AI is evolving at a very high pace and that the policy might need to evolve to follow new capabilities and uses of AI. We welcome an open dialogue with the authors on new tools and uses, with the aim of updating our AI policy in a timely manner.
The purpose of this policy is to provide clarity for the authors in the currently rapidly evolving AI environment by clearly indicating the uses that are allowed and those that are prohibited.
The overarching goal remains to ensure the scientific quality and integrity of the EuroGNC papers. Possible productivity gains enabled by leveraging AI tools can be allowed only if they do not conflict with this goal.
Scientific integrity includes that the authorship of the work, the underlying ideas, and their presentation remain those of the authors of the paper. At the time of writing (2025), most popular AI tools are facing patent and copyright infringement lawsuits. In many cases, the improper use of data for training without the proper consent of the authors and copyright holders is well documented, and authorship and copyright infringement claims may potentially extend to the content produced by these tools.
The authors and their employers are responsible and accountable for any legal issues in relation to the content and media/artwork of their paper, including when produced by or with the help of an AI tool. The CEAS EuroGNC conference only accepts papers released by the authors according to the licensing conditions explained in section 3 of the detailed authors instructions.
Detailed Acceptable and Prohibited Uses of Artificial Intelligence
- Any use of AI must be disclosed in the “Declaration of Use of Artificial Intelligence” section. This section is mandatory even if no AI tool was used (cf. page 8).
- The authors are ultimately responsible and accountable for the content of the work (not only legally, but also scientifically). Therefore, any content for which AI was used must be carefully reviewed, corrected, and completed if needed.
- Any use of AI as an integral part of the research topic (e.g., use of deep learning for relative navigation or use of large language models to interact with human operators) is allowed.
- The use of AI (in particular generative AI) to write/generate parts or all of the text, images, or videos is not allowed. This includes summarizing (e.g., for the abstract, the conclusions, or for the literature review) as well as generation of artwork for illustration.
- The use of AI for proofreading or translation is deemed acceptable, provided that the authorship evidently remains that of the authors. Again, the authors remain ultimately responsible for the correctness of the content.
- The use of AI in the research work as assistance to improve productivity (e.g. during programming or data analysis) is deemed acceptable but must be documented in great detail. The documentation must specify exactly which tools and versions were used, and for which parts of the work they were used. Any potential impact on the results must be discussed by the authors, who remain ultimately responsible for these results.
- The semi-transparent use of AI as part of a piece of equipment, which does not denature the content/data, is also deemed acceptable. A typical example is using a photo camera with AI-enhanced calibration, focusing, brightness, or colour balance features. The authors should be aware that such AI-based features may fail in unusual scenarios and should be cautious about the possible impact on the results of their work. Acknowledging that it is sometimes difficult to know whether AI-based algorithms have been used by manufacturers (especially for consumer electronics products), it is highly recommended but not mandatory to disclose such uses, especially if there is a risk that these algorithms may have affected the conclusions of the work.
- Recognizing that many search engines nowadays include AI-generated summarised answers (e.g. Google Gemini), being influenced by merely seeing the output of these AI tools can hardly be prevented and does not need to be explicitly disclosed. However, any significant use of their output must be disclosed. For example, at the time of writing, the answer provided by the most common search engines to a prompt like “how to compute the factorial of a number in C++” includes detailed recipes for writing the corresponding code, or even several concrete implementations. The same applies to summaries of the literature that such AI engines may have produced. Any use of detailed information provided by the AI features of search engines must be disclosed, as for any other use of AI.
- The use of AI for literature research appears to be rising along with the introduction of AI tools that are increasingly capable at such tasks. Being able to read and understand the work of others remains a key competency that young scientists/engineers need to learn and which requires regular training. The currently available AI tools still lack the critical thinking required for good literature research and often provide very incomplete results. Therefore, the use of such tools for literature research (for searching, not for writing the literature review section!) is not recommended but is tolerated, provided that it is properly documented and disclosed. Again, it is the responsibility of the authors to ensure that the literature research is complete and correct.
Consequences in Case of Violation of these Rules
Failure to comply with these rules, attempts to circumvent them, or failure to properly disclose the use of AI in the work may result in rejection or withdrawal of the paper at any point in the process, even after the conference. Reviewers are subject to the rules stated in the “For Reviewers” section below. The organisation team may use AI tools to check for inappropriate or undisclosed uses of AI.
For Reviewers
The use of artificial intelligence is not allowed for the reviews (regardless of the tool, task, or purpose).
This includes using AI tools to check for the use of AI in the reviewed papers: if reviewers suspect an inappropriate or undisclosed use of AI by the authors, they should report their suspicion to the conference chair team, which will investigate the case, possibly with the help of AI tools.
Data Privacy Warning
Many of the AI tools currently available provide limited to nonexistent data privacy. Authors are advised to be cautious with their inputs/prompts, as many tools keep a copy of them or even publish them with varying degrees of anonymization. This means that copying chunks of text or other data as input to such tools may require prior release by the authors' employers or project partners.