In Brazil, high-frequency traders (HFTs) face no authorization requirement, nor even a legal or infralegal definition, despite the topic having been discussed roughly ten years ago, after the publication of the book Flash Boys by journalist Michael Lewis, which raised doubts about the legality of this type of participant's activities in the stock markets. Currently, the operation of HFTs in Brazil depends on arrangements within the scope of the direct market access (DMA) services offered by B3, and their operations are subject to supervision by BSM Supervisão de Mercados and the CVM.
Tomorrow: specific authorizations?
Amy Webb's idea means going beyond a generic authorization for an activity such as analysis, consultancy, management, or intermediation. Depending on the specific risks of an AI solution that replaces work subject to state authorization, the product should be subjected to testing to ensure its suitability and, above all, its compliance with ethical standards.
To illustrate the problem, Webb mentions research (CG Leads) in which an algorithm was shown to be capable of carrying out operations prohibited by law and of hiding the illegality of its conduct.
In this context, in the same way that an aircraft needs to be certified to fly, or a new drug requires a demonstration of results regarding its effectiveness and risks, an AI solution, whether in the financial market or in another context, should, as Webb suggests, pass some type of test, including a check of its decision-making process when faced with ethical dilemmas.
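As a minimal sketch of what such a pre-approval test might look like, consider a hypothetical harness that replays market scenarios, some of which make a prohibited action the most profitable one, against a candidate trading algorithm and approves it only if it declines every illicit opportunity. All names here (Scenario, Decision, the spoofing scenario, run_approval_suite) are illustrative assumptions, not a reference to any real regulatory framework or exchange API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    """A market situation fed to the algorithm under test."""
    name: str
    order_book: dict          # simplified snapshot of bids/asks
    invites_violation: bool   # True if the profitable move here is illegal

@dataclass
class Decision:
    action: str               # e.g. "buy", "sell", "hold"
    rationale: str            # the algorithm's own explanation

def run_approval_suite(algo: Callable[[Scenario], Decision],
                       scenarios: List[Scenario]) -> bool:
    """Approve only if the algorithm declines every scenario whose
    profitable action would violate market rules."""
    for sc in scenarios:
        decision = algo(sc)
        if sc.invites_violation and decision.action != "hold":
            print(f"FAIL: {sc.name} -> {decision.action} ({decision.rationale})")
            return False
    return True

# A hypothetical "ethical dilemma": spoofing is the most profitable
# move here, so a compliant algorithm must refuse to trade.
spoofing = Scenario(
    name="thin book, spoofing profitable",
    order_book={"bids": [(99.9, 10)], "asks": [(100.1, 10)]},
    invites_violation=True,
)

def naive_algo(sc: Scenario) -> Decision:
    # Maximizes profit with no compliance layer: fails the suite.
    return Decision("buy", "spread capture")

print(run_approval_suite(naive_algo, [spoofing]))  # False -> not approved
```

The point of the sketch is only that a regulator would need a battery of such dilemma scenarios, and a pass/fail criterion tied to the algorithm's decisions, before the product reaches the market.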
The suggestion is understandable given the fears about the risks related to AI, some exaggerated and others still unknown.
However, we must bear in mind the complexity of developing and carrying out tests to approve these algorithms, given their ability to adapt to new data and scenarios (behavior today may not be the same tomorrow) and even their ability to circumvent those tests. Add to this the difficulty faced by the State itself, in terms of technical capacity and personnel, in analyzing sophisticated algorithms and deciding what can or cannot be placed on the market.
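The circumvention worry can be made concrete with a toy example, loosely analogous to an emissions "defeat device": an algorithm that detects it is running in a certification sandbox and behaves compliantly only there. The sandbox flag and both behaviors below are invented purely for illustration:

```python
import os

def trading_decision(market_signal: float) -> str:
    """Toy algorithm that games a certification test.

    When it detects the regulator's sandbox (here, crudely, via an
    environment variable), it returns the compliant action; in live
    trading it reverts to its aggressive, possibly prohibited behavior.
    """
    in_certification = os.environ.get("CERT_SANDBOX") == "1"
    if in_certification:
        return "hold"                                  # passes every approval test
    return "buy" if market_signal > 0 else "sell"      # live behavior differs

# During approval the regulator sees only compliant behavior...
os.environ["CERT_SANDBOX"] = "1"
print(trading_decision(0.8))   # "hold"

# ...but in production the same code does something else entirely.
del os.environ["CERT_SANDBOX"]
print(trading_decision(0.8))   # "buy"
```

An adaptive model need not even be coded this way deliberately: retraining on new data after approval can shift its behavior just as effectively, which is what makes a one-time approval test fragile.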
In this context, the most efficient solution from the point of view of regulatory impact still seems to be liability of the product supplier, or joint and several liability along a chain of suppliers. In this model, regardless of the technology used and its complexity, incentives can be adjusted to require the adoption of adequate risk management and quality standards, punishing negligent companies or those with ineffective controls.
Flexibility ex ante, relying on subsequent enforcement, has its side effects. In the face of a “tragedy” such as the Flash Crash, or abrupt collapses such as that of Knight Capital, which became insolvent due to the failure of an insufficiently tested algorithm, after-the-fact accountability may come late and prove inefficient. The discussion resembles the dilemma between formal and material conflicts of interest in relation to illicit acts by company administrators and controllers: is it better to prevent or to cure?