Advanced Security

A computer can set up thousands of fake email accounts in just seconds, or generate whole new fake identities causing chaos to many businesses daily. If you look at the variety of computer based attacks now being deployed by artificial intelligence, it’s extremely hard to keep yourself protected, which is why VICAP developed our unique Captcha.

VICAP has been independently tested and evaluated under advanced laboratory conditions as part of an extensive programme of intensive security and machine learning tests. The results confirm that VICAP’s Captcha is robust against the most sophisticated optical character recognition (OCR) and image recognition software on the market.

VICAP was also tested using advanced artificial intelligence techniques and deep learning mechanisms in one of the most advanced information security laboratories in the UK. The results were simply outstanding. Not a single one of the current image recognition and AI mechanisms was able to recognise and bypass VICAP.

Using our ingenious technology, we have managed to create a reliable and robust solution to protect online businesses of all sizes. Not only is our Captcha the most reliable test on the market, but it is also one of the fastest and most convenient for human users to use.

For the full list of our work on VICAP security please refer to our online publications or click here.

VICAP’s Current Security Challenges

The machine learning algorithms and Optical Character Recognition (OCR) software in use today are likely to include at least one of the mechanisms below. They are becoming increasingly successful at deciphering and recognising current text-based Captcha models. According to scientific research, nearly all of the text-based and image-based Captchas on the market are vulnerable to these recognition methods.

Websites big and small are being relentlessly targeted by automated computer attacks. Not all businesses are prepared for this or have the resources to withstand the chaos this causes to their business. Despite putting various protective mechanisms in place on their websites, businesses are still being successfully attacked by destructive AI advances.

The results of our advanced security experiments confirm that our technology is a breakthrough against these AI attacks. The independent tests conclude that VICAP is 100% secure and robust against current character recognition and Artificial Intelligence techniques. Businesses can now enjoy peace of mind and confidence that their websites are well protected against the ever increasing bot attacks.

Optical Character Recognition

Advanced Optical Character Recognition (OCR) forms the basis of most automated attacks on online protection mechanisms. Current Captchas are being solved with ease by ever-improving OCR software, which can recognise and decipher characters using a variety of techniques. Here are some of the most common methods currently adopted by bots…

Colour Matching Technique

Many Captchas are designed to use background noise, often in contrasting colours or patterns, to lessen the clarity of their characters. The first step any OCR or character recognition software takes to decipher such text is to separate it from the background noise by matching colours. Once the characters are isolated, it is far easier to decipher what they are.

Even if the characters are slightly distorted, a large database of distorted characters will still offer a close match, so establishing the identity of the isolated characters is not difficult.

VICAP is designed using monochrome pixels, meaning each image uses only one bit per pixel: black or white. Both the object and the background noise are therefore printed in the same colour, which makes it almost impossible for a computer to separate the object pixels from the background noise.
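
To illustrate the point, here is a minimal Python sketch (a toy example of our own, not VICAP code or any real OCR tool): colour-based filtering cleanly isolates text from differently coloured noise, but on a 1-bit monochrome image it has nothing to work with.

```python
# Toy illustration: colour-based separation works when text and noise
# differ in colour, but yields nothing on a 1-bit monochrome image.

def separate_by_colour(pixels, text_colour):
    """Keep only pixels matching the text colour (a classic OCR pre-step)."""
    return [[p if p == text_colour else 0 for p in row] for row in pixels]

# Two-colour Captcha: text pixels = 2, background noise = 1.
coloured = [
    [1, 2, 2, 1],
    [1, 2, 1, 1],
    [1, 2, 2, 1],
]
print(separate_by_colour(coloured, text_colour=2))  # text cleanly isolated

# Monochrome (1-bit) image: text and noise are both 1.
monochrome = [
    [1, 1, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
]
# Filtering by colour keeps everything: object and noise are indistinguishable.
print(separate_by_colour(monochrome, text_colour=1))
```

With only one colour available, the filter returns the image unchanged, so the attacker gains no separation at all.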

Segmentation

One method OCR software uses to recognise text is segmentation: the characters within an image are separated and isolated so that each can be analysed on its own. Having extracted a single, well-defined character image, the software then compares it against its database of stored images until it finds the closest aesthetic match.

Below is an example of one of the current Captcha models we often see in use. The diagram shows how easy it is to break this type of Captcha by exploiting the colour difference between the object and the background noise. The characters are simply separated from the background noise, which then allows the bot to match the well-defined characters against its stored images. Once a process has been established to break a given type of test, a bot can go on to solve hundreds of thousands of Captcha tests of the same type, rendering all of them useless.
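
As a toy illustration of the segment-then-match pipeline described above (a simplified sketch of our own, not any real bot’s code), the following splits a binary image at blank columns and matches each slice against a small template database:

```python
# Hypothetical segment-then-match sketch: split a binary image at blank
# columns, then compare each slice to a database of stored glyphs.

def split_at_blank_columns(image):
    """Segment a binary image into character slices at all-zero columns."""
    cols = list(zip(*image))
    segments, current = [], []
    for col in cols:
        if any(col):
            current.append(col)
        elif current:
            segments.append([list(r) for r in zip(*current)])
            current = []
    if current:
        segments.append([list(r) for r in zip(*current)])
    return segments

def best_match(segment, templates):
    """Return the template label with the most matching pixels."""
    def score(t):
        return sum(a == b for ra, rb in zip(segment, t) for a, b in zip(ra, rb))
    return max(templates, key=lambda label: score(templates[label]))

# Toy 3x3 glyphs standing in for a bot's stored-image database.
templates = {
    "T": [[1, 1, 1], [0, 1, 0], [0, 1, 0]],
    "L": [[1, 0, 0], [1, 0, 0], [1, 1, 1]],
}

# A two-character "Captcha" with a blank column between the glyphs.
captcha = [
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 1, 1],
]
chars = split_at_blank_columns(captcha)
print("".join(best_match(c, templates) for c in chars))  # → "TL"
```

Once characters can be cut apart like this, the matching step is trivial; that is exactly why VICAP never presents a whole character in any single frame.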

VICAP advances even further on this technology by displaying only part of the characters within each frame. The individual frames confuse AI systems because they show the characters only partially and wrap them in background noise. A computer that tries to solve each frame individually can retrieve no useful information about the object. Not a single current optical recognition programme could identify the characters and letters in the VICAP Captcha in order to separate them.

Frames Aggregation Technique

VICAP also uses a smart noise filter to make the sequence of the frames unpredictable. From the computer’s point of view, the order of the frames cannot be known or ascertained: they appear to be generated in real time by a random function, so the machine can make no ‘sense’ of their order.

However, our sophisticated visual system easily superimposes all of the VICAP frames based on our perception of density. The speed of the frames allows our eyes to see the hidden characters with ease, and we draw on human memory to apply a ‘sense’ check to the images, so quickly and intuitively that we never notice we are doing it. Even if a machine learning system superimposes the VICAP frames, it gets nothing but pure random noise, with no order to the images.
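
The frame-based idea can be sketched as follows; the glyph coordinates, frame count, and noise parameters here are all illustrative assumptions of our own, not VICAP’s actual scheme:

```python
import random

# Illustrative sketch: scatter a glyph's pixels across several frames and
# pad each frame with random noise, in the spirit of the scheme above.
random.seed(0)

glyph = {(0, 1), (1, 1), (2, 0), (2, 1), (2, 2)}  # pixel coords of a character
frames = [set(), set(), set()]

# Distribute the glyph's pixels across the frames...
for i, px in enumerate(sorted(glyph)):
    frames[i % 3].add(px)
# ...then wrap each frame in its own random noise on a 10x10 grid.
for f in frames:
    while len(f) < 6:
        f.add((random.randrange(10), random.randrange(10)))

# No single frame carries the whole character, and superimposing every
# frame merges the glyph with ALL of the accumulated noise at once.
union = set().union(*frames)
print(len(union), "lit pixels in the superimposition vs", len(glyph), "glyph pixels")
```

A machine analysing any one frame sees mostly noise, while flattening the whole sequence buries the glyph in the combined noise of every frame; only a viewer integrating the frames over time, as human vision does, perceives the character.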

We enhance the security of VICAP’s Captcha even further by injecting random frames alongside the original frames in the sequence. This makes it even harder for computer recognition software and artificial intelligence systems to analyse and differentiate the VICAP frames. Because the random frames mix up the sequence and frequency of the pixels, the stream appears to a computer as yet more random, undecipherable noise.

Brute Force Attack (or Dictionary Attack)

Brute force attacks (also known as dictionary attacks) are among the most dangerous threats to Captchas. These attacks repeatedly guess different answers to a Captcha until they find the correct one, drawing on vast databases of stored images against which each newly deciphered image can be assessed. In a tiny amount of time a bot can process thousands of attempts until it finds its match.

Brute force attacks are not possible with a VICAP Captcha. Thanks to VICAP’s smart security application, our Captcha generates a new set of frames after every incorrect input. A brute force approach would need to search 7,962,624 different combinations to find the correct answer, and because a fresh set of frames is generated after every single incorrect guess, brute force and dictionary attacks have effectively no chance of defeating VICAP.
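
For context, the quoted figure of 7,962,624 equals 24 to the power of 5, which would correspond to a five-character answer drawn from a 24-symbol alphabet (that mapping is our assumption, not a documented VICAP parameter):

```python
# The quoted search space of 7,962,624 equals 24**5, consistent with a
# five-character answer over a 24-symbol alphabet (our assumption, not a
# documented VICAP parameter).
alphabet_size = 24
answer_length = 5
combinations = alphabet_size ** answer_length
print(combinations)  # 7962624

# If the challenge regenerates after every wrong guess, each attempt is an
# independent draw: the chance of any single guess succeeding stays at
# 1/combinations, and past failures never narrow the search space.
per_guess = 1 / combinations
print(f"{per_guess:.2e}")  # ~1.26e-07 per attempt, on every attempt
```

This is the key difference from a static Captcha, where each wrong guess eliminates a candidate and the attacker's odds improve with every attempt.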

Still want to know more about VICAP’s advanced security? Please refer to the full list of international and award-winning publications about VICAP security or click here.