Concerns about the safe and ethical use of artificial intelligence have grown, and for good reason. It's crucial that organizations that choose to use an AI like New Dialogue follow our terms of service, which require using the AI safely, acting ethically and responsibly, and considering carefully how the service is applied.
New Dialogue AI is developed using an Open Source Large Language Model (LLM), along with other components, both open and closed. The team behind New Dialogue has been working in enterprise software design and development for more than fifteen years. Our other products manage sensitive healthcare information. We understand privacy and security.
Data security and privacy are at the core of our development of New Dialogue. Protecting sensitive information, however, is also your responsibility: ensure you have strong data protection mechanisms in place when using our service. This extends to compliance with data privacy laws such as GDPR and HIPAA, encryption, and access restrictions.
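For illustration only, here is a minimal Python sketch of two such mechanisms: a role-based gate on who may send documents to an AI service, and a redaction pass that strips recognizable personal data before anything leaves your systems. The role names, patterns, and upload step are assumptions for this sketch, not part of New Dialogue's actual API.

```python
import re

# Hypothetical sketch: the roles, patterns, and upload flow below are
# illustrative assumptions, not part of New Dialogue's actual API.
ALLOWED_UPLOAD_ROLES = {"admin", "compliance_officer"}

# Simple patterns for data you may not want to send to any AI service.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable personal data with placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def prepare_upload(user_role: str, document: str) -> str:
    """Gate uploads by role, then strip personal data from the body."""
    if user_role not in ALLOWED_UPLOAD_ROLES:
        raise PermissionError(f"role '{user_role}' may not upload documents")
    return redact(document)

print(prepare_upload("admin", "Contact jane.doe@example.com, SSN 123-45-6789."))
```

A simple pre-upload filter like this is no substitute for full GDPR or HIPAA compliance, but it illustrates the kind of control your organization should place between sensitive data and any AI service.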
Building trust among stakeholders requires transparency. Consider how you introduce our service to your users, and share and discuss the decisions it generates. This includes communicating the limitations of AI systems.
Open source AI systems can inherit biases from the data used to train them, producing unfair or skewed results. Our service may also produce unfair, unexpected, or biased results if the prompts entered by your users are designed to generate them. If your users are to use our service properly, actively work to detect and reduce prejudice-driven prompting. This entails applying bias-reduction strategies, employing representative and varied data, and regularly auditing usage by reviewing the prompt history and Saved Items captured within our service, as sketched below.
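As one illustration, the Python sketch below scans an exported prompt history for terms that might warrant human review. The CSV layout, column names, and flag list are assumptions; New Dialogue's actual export format may differ.

```python
import csv
from collections import Counter

# Hypothetical sketch: the CSV columns and flag list below are illustrative
# assumptions, not a documented New Dialogue export format.
FLAGGED_TERMS = {"gender", "race", "age", "nationality", "religion"}

def audit_prompt_history(path: str) -> Counter:
    """Count flagged terms across exported prompts for human follow-up."""
    hits = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # assumed columns: user, prompt
            words = set(row["prompt"].lower().split())
            for term in FLAGGED_TERMS & words:
                hits[term] += 1
    return hits

if __name__ == "__main__":
    for term, count in audit_prompt_history("prompt_history.csv").most_common():
        print(f"{term}: appears in {count} prompts")
```

A keyword match is only a cue for a human reviewer, not a verdict that a prompt was biased; treat output like this as input to the governance process described next.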
It's critical to establish explicit ethical standards and governance structures for the use of AI. An AI usage committee should be established within your organization to oversee your use of our service, set rules and guidelines for users to follow, and decide how to handle ethical issues. These guidelines should reflect your organization's values and practices.
Although AI can automate manual work, we encourage you to keep human oversight in place. Organizations should not depend entirely on self-generating AI systems, particularly for crucial decisions. Human specialists should continue to provide checks and balances to keep AI aligned with corporate objectives and values.
Frequent monitoring and auditing help ensure an AI system's continued performance and safety. This entails tracking the effects of results generated by user prompting, spotting problems, and implementing the required fixes. Auditing the prompt history within our service can help uncover prejudice, privacy violations, and other safety issues.
Educate your users on appropriate and safe practices for using our service. This entails understanding AI's capabilities and constraints, as well as the associated ethical issues. Training programs can help users make wise prompt choices when working with AI.
To detect dangers associated with your use of AI, plan and undertake risk assessments. A proactive stance allows problems to be addressed before they occur and reduces the impact of unanticipated consequences.
Your organization needs to stay aware of the latest developments in AI-related laws and regulations. Adherence to the pertinent statutes and regulations is vital, including laws on data privacy, liability, and intellectual property rights.
Organizations that prioritize fairness, transparency, governance, and data security will be better placed to reap the benefits of AI while reducing its risks. By adopting a complete approach to AI safety, organizations can navigate the AI future successfully and ethically.