The regulation will place AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.
The AI Act aims to establish a legal framework for the 'trustworthy development' of AI systems, prioritising safe use, transparency, and ethical principles.
First of all, update (or create) your AI policies and procedures. If anything goes wrong, these will come under scrutiny, so make sure both internal and customer-facing policies are revised to reflect the AI Act's values of transparency, non-discrimination, and fairness.
Staff who use these systems will be affected: some will be required to carry out the human oversight duties the regulation mandates, and risk management will be much easier to audit if everyone understands the risks involved.
Limited-risk systems carry transparency obligations: they must be developed so that users are fully aware they are interacting with an AI system.
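To make that obligation concrete, here is a minimal sketch of how a disclosure might be enforced in code. The Act prescribes no API; the `DisclosingChatSession` class and its methods are invented for illustration, and any real system would wrap its own model-calling layer in a similar way.

```python
# Illustrative only: a hypothetical wrapper that meets the limited-risk
# transparency obligation by disclosing, up front, that the user is
# talking to an AI system. Names here are assumptions, not a standard.

AI_DISCLOSURE = (
    "You are interacting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

class DisclosingChatSession:
    """Wraps any reply-generating callable and guarantees the
    disclosure is shown before the first AI-generated reply."""

    def __init__(self, generate_reply):
        self.generate_reply = generate_reply  # e.g. a call into your model
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        prefix = ""
        if not self.disclosed:
            prefix = AI_DISCLOSURE + "\n\n"
            self.disclosed = True
        return prefix + self.generate_reply(user_message)


# Usage: any function that turns a message into a reply can be wrapped.
session = DisclosingChatSession(lambda msg: f"Echo: {msg}")
print(session.reply("Hello"))   # first reply carries the disclosure
print(session.reply("Thanks"))  # later replies do not repeat it
```

Centralising the disclosure in one wrapper, rather than trusting each feature team to remember it, also gives auditors a single place to verify compliance.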