A Decision-Making Tool for Creating and Identifying Face Sketches.

Abstract: A criminal can be identified and prosecuted quickly using a face sketch based on an eyewitness description. Several applications that convert hand-drawn face sketches and use them to automatically identify and recognize a suspect from the police database have been proposed in the past, but the existing systems suffered from drawbacks: a limited facial-feature kit and a cartoonish look to the constructed suspect face, which made it much harder to use these applications and obtain the required results and efficiency. In this paper, we present a stand-alone tool that allows users to create composite face sketches of suspects without the need for forensic artists. The application offers a drag-and-drop feature and can match the produced composite facial sketch against the criminal database in real time. Using deep learning and cloud infrastructure, this can be done considerably more rapidly and efficiently.


I. Introduction
A sketch is a drawing that is created quickly and can be made with limited information. Sketches are widely used by artists as a preliminary step in the creation of more complex paintings or drawings. In law enforcement, sketching techniques are frequently employed to identify suspects from a witness's memory. A face sketch drawn from the description provided by an eyewitness can help identify a criminal and bring them to justice; however, in today's modernized world, the traditional method of hand-drawing a sketch is not found to be effective or time saving. Such methods are primarily manual, painstaking, and may not lead to the correct identification of the guilty party.
When the sketch is complete, it is compared with images in the department's possession in order to identify the culprit; a person's photo is on record only if they have been convicted at least once. This recognition was previously done manually, which was a time-consuming process, and automatic recognition techniques followed. Many automatic recognition algorithms have been established to date, and the work listed here has attempted to improve these techniques. Uhl et al. [11] began work on automatic face sketch recognition in 1994, proposing a method based on Principal Component Analysis (PCA). After that, Tang and Wang [16] did the majority of the work in this field. They introduced an automatic method for recognizing photos from a database using a sketch, in which a sketch was not matched directly with a photo because of the significant differences in shape and texture between a sketch and a real photo. Instead, the database photos were converted to sketches using the eigenface method, which relies on the Karhunen-Loeve transform, and feature vectors were generated for the database sketches and the test image. The sketch and face feature vectors were then matched. The dataset used divided the face sketch into patches, giving unsatisfactory results.
*e-mail: manishbhoir50@gmail.com  **e-mail: chandangosavi14@gmail.com  ***e-mail: prathameshg.2210@gmail.com  ****e-mail: bhavana.alte@rait.ac.in
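The eigenface pipeline described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the general technique, not Tang and Wang's implementation: the gallery is projected onto the top principal components, and a probe sketch is matched to the nearest projected gallery entry. The tiny random "sketches" stand in for a real gallery.

```python
import numpy as np

def eigenface_match(gallery, probe, k=5):
    """Match a probe sketch against a gallery using PCA (eigenfaces):
    project everything onto the top-k principal components and return
    the index of the nearest gallery entry in that subspace."""
    X = gallery.reshape(len(gallery), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Top-k principal directions via SVD of the centred data matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]                                  # (k, pixels) eigenfaces
    coeffs = Xc @ basis.T                           # gallery feature vectors
    p = (probe.reshape(-1).astype(float) - mean) @ basis.T
    dists = np.linalg.norm(coeffs - p, axis=1)      # Euclidean distance in PCA space
    return int(np.argmin(dists))

# Tiny synthetic demo: three 8x8 "sketches"; the probe is a noisy copy of #1.
rng = np.random.default_rng(0)
gallery = rng.random((3, 8, 8))
probe = gallery[1] + 0.01 * rng.random((8, 8))
print(eigenface_match(gallery, probe, k=2))  # → 1
```

In the paper's setting, `gallery` would hold the sketches synthesised from the database photos and `probe` the witness sketch; a real system would also need face alignment before projection.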
Patil and Shubhangi et al. [6] created a matching system using a geometrical face model. The AdaBoost algorithm was used to detect the face region, and the geometrical structure of the face was then used to mark the main facial components such as the eyes, nose, and mouth. A texture feature was then extracted from each facial component using the Weber Local Descriptor (WLD). An artificial neural network (ANN) was used for classification. This system was incredibly complex to develop and cost a lot of money to run. Liu, Bae et al. [9] developed an automatic face-sketch recognition system that used joint dictionary learning to match photos across two modalities (photos and sketches). Dalal, Vishwakarma et al. [2] proposed a feature-based matching technique, in which a feature vector containing the features of interest is used. The facial image (whether a drawing or a photo) was described by histogram of oriented gradients (HoG) and grey level co-occurrence matrix (GLCM) features. The use of hand-drawn facial sketches was a limitation, as was the time commitment.
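The GLCM texture features mentioned above can be illustrated with a minimal NumPy sketch. This is a simplified version of the technique (horizontal neighbour pairs only, two derived statistics), not the cited authors' code; a full implementation would consider several offsets and angles.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Minimal grey-level co-occurrence matrix (GLCM) descriptor:
    quantize the image into `levels` grey bins, count horizontal
    neighbour pairs, then derive contrast and energy statistics."""
    q = np.floor(img * levels).clip(0, levels - 1).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                     # co-occurrence count
    glcm /= glcm.sum()                      # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()  # local intensity variation
    energy = (glcm ** 2).sum()              # texture uniformity
    return np.array([contrast, energy])

rng = np.random.default_rng(1)
flat = np.full((16, 16), 0.5)   # uniform patch: zero contrast, energy 1
noisy = rng.random((16, 16))    # noisy patch: high contrast, low energy
print(glcm_features(flat), glcm_features(noisy))
```

A matcher in the spirit of [2] would concatenate such GLCM statistics with a HoG vector and compare feature vectors between the sketch and each database photo.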
ITM Web of Conferences 44, 03032 (2022) https://doi.org/10.1051/itmconf/20224403032 ICACC-2022
The foregoing applications and needs prompted us to consider developing an application that would allow users to upload hand-drawn individual features to the platform, which would then be converted into the application's component set, rather than simply selecting a set of individual features such as eyes, ears, mouth, and so on to create a face sketch. As a result, the produced sketch would resemble a hand-drawn sketch much more closely, making identification much easier for law enforcement authorities.
Our platform would also allow the law enforcement team to upload a previous hand-drawn sketch in order to leverage the platform's considerably more efficient deep learning algorithm and cloud infrastructure to identify and recognize the suspect. The machine learning system would learn from the sketches and the database to recommend to the user all of the comparable facial traits that may be employed with a single selected feature, reducing the platform's timeframe and increasing its efficiency.
The remainder of this paper is organized as follows: Section II presents a literature survey relevant to our topic. Section III describes the proposed work. Results and analysis are described in Section IV. Section V outlines future work.

II. Literature Survey
Face sketch construction and recognition have been studied extensively using a variety of methods. Charlie Frowd, Anna Petkovic, Kamran Nawaz, and Yasmeen Bashir [8] created a stand-alone programme for building facial composites. The initial system was found to be time consuming and confusing, so a new approach was adopted in which the victim was given a list of faces and instructed to select those that resembled the suspect; the system then combined all of the selected faces and attempted to predict the criminal's facial composite automatically. The results were promising: 10 out of 12 composite faces were correctly named, with 21.3 percent correct naming when the witness was assisted by a department employee in constructing the faces and 17.1 percent when the witness constructed the faces on their own. The facial composites created had limited scope for improving accuracy, and the platform was time consuming as well as confusing.
Fang, Deng et al. [1] first developed an Identity-Aware CycleGAN (IACycleGAN) model that supervises the image-generating network using a new perceptual loss. CycleGAN's photo-sketch synthesis was improved by paying closer attention to the synthesis of crucial facial parts such as the eyes and nose, both of which are vital for identity. They also developed a mutual optimization approach between the synthesis model and the recognition model: IACycleGAN iteratively synthesises better images, and the triplet loss on the produced images improves the recognition model. Extensive tests were conducted on both photo-to-sketch and sketch-to-photo tasks using the widely used CUFS and CUFSF databases. This method was extremely difficult to implement and had a significant computational cost.
Sahil Dalal, Vishwakarma et al. [2] introduced a feature-based matching mechanism in which a feature vector containing features of interest is used. The facial picture (whether a drawing or a photo) was described in terms of histogram of oriented gradients (HoG) features and grey level co-occurrence matrix (GLCM) features; computing these characteristics first enhances the likelihood of correct matches, and the image can be depicted in a variety of ways. The experimental results show that the dataset used, which divided the face sketch into patches, led to a decline in accuracy.
C. Galea & Farrugia et al. [3] thoroughly evaluated prominent and state-of-the-art algorithms using publicly available datasets, acting as a standard for future algorithms. The proposed framework was shown to minimise cost when compared to a leading technique: the error rate for viewed sketches was reduced by 80.7 percent, and for real-world forensic sketches the mean retrieval rank improved by 32.5 percent. The use of hand-drawn facial sketches was a limitation, as was the time commitment.
Wan, Lee, Jong et al. [4] concatenated the original training images and sketches with high-pass filtered image patches of the training sketches to create a joint training model. The use of hand-drawn facial sketches and the length of time required were both limitations.
Sharma & Bhatt et al. [5] investigated automatically detecting a person's face. PCA was used to extract crucial information from the input images, which was later tested with linear discriminant analysis, a multilayer perceptron, naive Bayes, and a support vector machine. This method was extremely difficult to implement and came at a high computational cost.
Another method, proposed by Anil K. Jain and Brendan Klare [10], was sketch-to-photo matching using the SIFT descriptor, which provided results based on the measured SIFT descriptor distance between the face photos in the database and the sketches. The algorithm first transforms the face photos using a linear transformation based on Tang and Wang's model, and then measures the SIFT descriptor distance between the sketch and each face photo, as well as distances between images in the database, for higher accuracy. The experimental results reveal that the datasets used were quite comparable to those used by Tang, and that the algorithm's addition of descriptor-distance measurement provided better results and accuracy than the model offered by Tang and Wang.
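The descriptor-distance matching idea above can be illustrated with a generic nearest-neighbour matcher using Lowe's ratio test, the standard way SIFT descriptors are compared. The toy 2-D "descriptors" below are placeholders; real SIFT descriptors are 128-dimensional and would come from a feature extractor such as OpenCV's.

```python
import numpy as np

def match_descriptors(desc_sketch, desc_photo, ratio=0.75):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    a sketch descriptor is matched to a photo descriptor only when the
    best distance is clearly smaller than the second-best distance."""
    matches = []
    for i, d in enumerate(desc_sketch):
        dists = np.linalg.norm(desc_photo - d, axis=1)
        j, k = np.argsort(dists)[:2]        # best and second-best candidates
        if dists[j] < ratio * dists[k]:     # accept only unambiguous matches
            matches.append((i, int(j)))
    return matches

# Toy demo: the first sketch descriptor has one clear neighbour,
# the second is equidistant from all photo descriptors (ambiguous).
photo = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
sketch = np.array([[0.1, 0.0], [5.0, 5.0]])
print(match_descriptors(sketch, photo))  # → [(0, 0)]
```

A score such as the number of accepted matches, or the mean matched distance, can then rank database photos against the query sketch.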
In 2019, Zhang et al. [17] presented an automatic and effective photo-to-sketch synthesis method based on dual transfer, which includes both inter-domain and intra-domain transfer. This method is extremely difficult to implement and has a high computational cost.
The common issue with all of the proposed algorithms was that they compared face sketches with human faces that were usually front facing, making it easier to map the drawn sketch to the face photograph. However, when a photograph or sketch captured a face at a different angle, the algorithms were less likely to map and match it with a front-facing face from the database.
Systems have even been proposed for composite face construction, but most used facial features taken from photographs, selected by the operator as described by the witness, and finally compiled into a single human face. Because each facial feature came from a separate face photograph with various dissimilarities, matching the composite with a criminal face was much more complicated for humans as well as for any algorithm.

III. Proposed work
In the proposed work, we design and develop our platform in two stages. In the first module, an accurate composite face sketch can be created using predefined facial feature sets as tools, which can be resized and rearranged as needed or as described by the eyewitness. The human face is divided into numerous facial parts, such as the head, eyes, eyebrows, lips, nose, and ears, which are available for use (as shown in Figure 2).
The flowchart in Figure 1 depicts the users' journey through the platform as it constructs an accurate face sketch based on the description. The dashboard is designed to be simple so that no professional training is required before using the platform, saving time and resources for the department. When a user selects a face category, a new module appears to the right of the canvas, allowing the user to choose an element from a list of face elements to build the face sketch. This choice is made based on the eyewitness's description.
When elements are selected, they appear on the canvas and can be moved and placed according to the eyewitness's description to create a more accurate sketch (as shown in Figure 3). However, the elements have a fixed location and stacking order on the canvas; for example, the eye elements are placed above the head element regardless of the order in which they were selected, and the same holds for each facial element.
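The fixed stacking order can be captured with a small lookup table. This is a minimal sketch of the idea, not the platform's actual code; the category names and layer numbers are illustrative assumptions.

```python
# Fixed draw order for canvas elements: regardless of the order in which
# the witness picks them, each element is rendered at its designated
# layer (head at the back, hair on top). Names/levels are hypothetical.
LAYER = {"head": 0, "ears": 1, "eyes": 2, "eyebrows": 3,
         "nose": 4, "lips": 5, "hair": 6}

def canvas_order(selected):
    """Return the selected elements sorted into their fixed draw order."""
    return sorted(selected, key=lambda e: LAYER[e])

print(canvas_order(["eyes", "head", "nose"]))  # → ['head', 'eyes', 'nose']
```

Rendering the elements in this order guarantees, for instance, that eyes always appear over the head outline even if the witness chose them first.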
The final module offers options that improve the dashboard's usability. For example, if a user selects an element that should not have been picked, this can be corrected using the option to erase that element, which becomes visible by selecting the face category from the left panel. The most essential buttons are in the right-hand panel, which also features a button to entirely clear the dashboard's canvas, leaving it blank.
There is also a button to save the completed face drawing as a PNG file for easier access in the future; depending on the law enforcement department, this could be any location on the host computer or on the server. The dashboard is made up of five primary modules. The first and most essential is the canvas, located in the centre of the screen, which houses the face sketch components and features that aid in the creation of the face sketch.
Creating a face sketch would be difficult if all of the face elements were presented at once and in an unorganised fashion; the process would be cumbersome for the user and it would be hard to generate an accurate face, contrary to the proposed system's objective. To solve this problem, we order the face elements according to the face category they belong to, such as head, nose, hair, and eyes, making it much easier for the user to interact with the platform and build the face sketch, as shown in Figure 2. This is featured in the left-hand column of the dashboard's canvas, where clicking on a face category gives the user access to a variety of corresponding face structures.
Because a single face category could contain any number of face elements, our platform will in the future use machine learning to predict similar face elements, or to predict and suggest the elements to be selected in the face sketch. This will only work once we have appropriate data on which to train the model, and we will continue improving the platform. The system essentially tries to recognise the face if it exists in the database and, if it does, displays it together with its metadata.
The flowchart shown in Figure 4 depicts the user flow followed by the platform to provide an accurate face sketch based on the description. The dashboard is designed to be simple so that no professional training is required prior to using the platform, saving time and resources for the department. After mapping the sketch, matching it against the records, and finding a match, the platform presents the matched face along with the similarity percentage and other details of the person from the records (shown in Figure 7). The platform that displays all of this, as well as the matched person, is depicted in the diagram below. Figure 5 shows that the first step, before using the platform to detect faces, is to train the platform's algorithm to recognise and assign IDs to the face photos of the people in the law enforcement department's existing records. The platform's algorithms connect to the records and break down each face photo into smaller features, assigning an ID to each of the numerous features created for a single face photo. The recognition module is then executed: the user first opens either a hand-drawn sketch or a face sketch constructed on our platform and saved on the host machine, after which the opened face sketch is uploaded to the law enforcement server housing the recognition module, so that the process and the record data remain secure, accurate, and untampered with. Once the sketch is sent to the server, the algorithm traces the picture to learn the features in the sketch and maps them as indicated in Figure 6, in order to match them with the features of the face photos in the database.
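The enrol-then-match flow can be sketched as follows. This is a hedged illustration of the pipeline's shape, not the platform's implementation: a real deployment would use a learned deep feature extractor on the server, whereas here the "features" are just normalized pixel vectors, and the record IDs are invented for the demo.

```python
import numpy as np

class SketchMatcher:
    """Toy version of the enrolment/matching flow: each record photo is
    reduced to a feature vector stored under an ID, and a query sketch
    is matched to the record with the highest cosine similarity."""

    def __init__(self):
        self.db = {}                           # record ID -> feature vector

    def _features(self, img):
        v = img.reshape(-1).astype(float)
        return v / np.linalg.norm(v)           # unit-normalized pixel vector

    def enroll(self, person_id, photo):
        """Step 1: break a record photo into features under an ID."""
        self.db[person_id] = self._features(photo)

    def match(self, sketch):
        """Step 2: return the best-matching record ID and a similarity %."""
        q = self._features(sketch)
        best_id = max(self.db, key=lambda i: float(self.db[i] @ q))
        similarity = float(self.db[best_id] @ q) * 100
        return best_id, similarity

# Demo with two hypothetical records; the query is a noisy copy of "A101".
rng = np.random.default_rng(2)
a, b = rng.random((16, 16)), rng.random((16, 16))
m = SketchMatcher()
m.enroll("A101", a)
m.enroll("B202", b)
person, sim = m.match(a + 0.05 * rng.random((16, 16)))
print(person, round(sim, 1))   # matched record ID and similarity percentage
```

In the platform, `enroll` corresponds to the training pass over the department's records in Figure 5, and `match` to the server-side recognition step that returns the matched face and similarity percentage shown in Figure 7.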

IV. Results and Conclusion
From the first splash screen to the final screen for retrieving data from the records, the "Face Sketch Construction and Recognition" project will be created, built, and tested with real-world scenarios in mind, with security and accuracy paramount in every scenario. Compared with previous research in this sector, the platform contains elements that are different and unique, boosting overall security and accuracy and standing out among related studies and proposed systems in this field. The platform can also be connected to social media, as social media platforms are a rich source of data in today's world; doing so would help the platform find a much more accurate match for the face sketch, making the process both more accurate and faster.
Compared with relevant research in this sector, the platform could include features that are different, unique, and easy to upgrade, boosting overall security and accuracy. The experiment was conducted using viewed sketches. The dataset was downloaded and tested from an open store (available for free on the web). The viewed sketches were acquired as a cluster of sketch-photograph pairs from the CUHK face sketch database, which contains 188 pairs; as a result, there are 188 sets of viewed sketches (as shown in Figure 7). When tested with various test cases, test scenarios, and datasets (as shown in Figures 7, 8, and 9), the platform showed good accuracy and speed during the face sketch construction and recognition process, providing an average similarity of more than 90%, with the system giving an accuracy of 94.6% and a confidence level of 100%, which is a very good rate according to related studies in this field.

V. Future Work
The "Face Sketch Construction and Recognition" project is currently meant to work on a limited number of scenarios, such as constructing face sketches and matching those sketches to face images in law enforcement records.
The platform can be greatly extended in the future to work with a variety of technologies and scenarios, allowing it to investigate numerous media and surveillance mediums and produce a far wider range of outputs. Using 3D mapping and imaging techniques, the platform can be adapted to match the face sketch with human faces from video feeds, and the same may be applied to CCTV surveillance to perform face recognition on live CCTV footage using the face sketch.