Can justice ever be perfectly served?
The common goal in forensics and criminal justice is to solve crimes and prosecute offenders without error. Since humans are involved in this process, the possibility of error is ever-present.
According to the Innocence Project, an estimated 1% of prisoners were wrongfully convicted. Other estimates of false convictions suggest the figure is as high as 5-10%.
To reduce this startling figure, AI can be brought into the process to catch errors. AI offers criminal investigators new ways to use technology in the pursuit of justice, making their workload both more manageable and more accurate.
Here are three ways AI helps solve crime:
Video / Photo Analysis
Crime happens frequently, at times too often to manage. A recent implementation of AI can help address crime as it occurs without the presence of law enforcement.
Researchers in Malaysia are developing AI software for CCTV cameras to lower the number of street crimes in the country. This software can autonomously detect crimes by analyzing security camera footage alone.
This software would do the following (in order):
- Detect whether a person in the footage is wielding a weapon.
- Check whether the suspect in question is engaging in “aggressive actions.”
- Alert law enforcement if a crime is suspected.
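The ordered pipeline above can be sketched as a simple gate: each check must trip before the next runs, and an alert fires only at the end. This is a hypothetical illustration; `detect_weapon` and `detect_aggression` are placeholder names standing in for trained computer-vision models, not functions from the Malaysian researchers' actual system.

```python
# Hypothetical sketch of the CCTV analysis pipeline described above.
# In a real system, detect_weapon and detect_aggression would be
# trained vision models running on video frames.

def detect_weapon(frame):
    # Placeholder: a real system would run an object-detection model here.
    return frame.get("weapon", False)

def detect_aggression(frame):
    # Placeholder: a real system would analyze pose/motion over time.
    return frame.get("aggressive", False)

def should_alert(frame):
    """Run the checks in order; alert law enforcement only if both trip."""
    if not detect_weapon(frame):
        return False
    if not detect_aggression(frame):
        return False
    return True  # suspected crime: notify law enforcement

print(should_alert({"weapon": True, "aggressive": True}))   # True
print(should_alert({"weapon": True, "aggressive": False}))  # False
```

The ordering matters for efficiency: the cheaper weapon check filters most footage before the costlier behavioral analysis runs.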
Researchers hope this AI technology will help police stop crime as it happens and deter potential crime from occurring by using facial and gait recognition technology to identify criminals and gauge aggressive behavior.
Processing Older DNA
AI technology has become highly efficient at processing DNA. Since DNA evidence was introduced to forensics in 1986, solving crimes has become easier and more concrete. As AI technology improves, so does DNA processing.
With the help of direct-to-consumer genetic genealogy databases, over 50 cold cases have been solved, from missing persons to murder. Notably, one of the most prominent serial killers, dubbed the “Golden State Killer,” was brought to justice using these same databases after living freely for decades; the 12 cases tied to him were reexamined using newer DNA technology.
With 6,544 unsolved cases in 2019, AI and DNA technology could help uncover answers for investigators to lower the number of cold cases.
Risk Assessment Tools
Risk assessment tools are used to evaluate a suspect and predict how much of a danger they pose. RAIs (Risk Assessment Instruments) use AI to estimate the probability that an offender will reoffend and to support the prosecution process.
Studies suggest that algorithmic RAIs have the potential to produce consistent and accurate results. Researchers found that judges who used checklist-style RAIs, which weigh factors such as age and the number of times a suspect failed to appear in court, reached more consistent risk determinations.
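A checklist-style instrument like the one described can be sketched as a small scoring function. This is a hypothetical illustration only: the weights, thresholds, and factor names below are invented for the example and do not come from any real instrument.

```python
# Hypothetical checklist-style risk score using the two factors
# mentioned above (age, prior failures to appear in court).
# All weights and cutoffs are illustrative, not from a real RAI.

def checklist_risk_score(age, failures_to_appear):
    score = 0
    if age < 25:                          # youth adds one point
        score += 1
    score += min(failures_to_appear, 3)   # cap the FTA contribution at 3
    return score

def risk_band(score):
    """Map a raw score to a coarse risk category."""
    if score >= 3:
        return "high"
    if score <= 1:
        return "low"
    return "medium"

print(risk_band(checklist_risk_score(age=22, failures_to_appear=2)))  # high
print(risk_band(checklist_risk_score(age=40, failures_to_appear=0)))  # low
```

The appeal of such checklists is exactly this transparency: every point in the score can be traced to a named factor, which is what makes judges' decisions more consistent when they use them.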
However, implementing AI into criminal justice has led to some controversy, with some deeming these tools unethical and unreliable.
In China, Uyghurs (a Muslim minority) are disproportionately targeted using AI facial recognition technology. Unlike the CCTV technology being developed in Malaysia, this constant surveillance that singles out a marginalized group is an example of an unethical use of AI.
COMPAS is a widely used risk assessment tool built on an intelligent algorithm. In 2009, a study suggested that COMPAS failed to produce accurate results and exhibited bias against marginalized ethnic groups, a bias the software absorbed from the data sets it learned from. The study concluded that risk assessments of African-American men were the least accurate.
The inaccuracy found in COMPAS is one of many ways AI can reproduce bias. Although there are strong arguments that AI is beneficial in determining crime and punishment, the evidence suggests AI has not yet reached the point of surpassing human error.