To use our Yosnalab shopping services, you need to read our terms and conditions carefully. If you disagree with any part of these terms, you may not access our service.
I agree to pay the rental rate for the period the product is in my use and in transit, and I further agree to promptly return the products at the end of the rental period in the same condition as received.
I will notify Yosnalab if my contact information changes.
I also agree to pay for any damage to, or loss of, the rented merchandise occurring during my possession. Upon return and inspection, any necessary repairs and any missing itemized accessories will be charged at our current rates and billed to me.
A full day's rental is charged even for partial-day use. There are absolutely no refunds for early returns.
Products may be rented for up to 4 weeks only; cost is calculated per day of use. We are not responsible for damage to materials or any other liability resulting from the use or malfunction of the equipment. The initial deposit will not be returned until the product is returned.
I am responsible for keeping track of my due dates. I understand that any notices sent out by Yosnalab are a courtesy only and failure to receive them does not excuse me from any charges.
A copy of both sides of a valid institution identification card must be presented. Deposits may be made only in cash, not by credit/debit card.
If you have questions or suggestions, please contact us.
Sign Language Recognition could be a breakthrough in helping deaf-mute individuals, and it has been researched for many years. Unfortunately, each study has its own limitations, and the results are still not ready for commercial use. Some research has succeeded in recognizing sign language but is too expensive to commercialize. Nowadays, developing Sign Language Recognition that can be used commercially is attracting considerable research attention. Researchers approach the problem in various ways, starting with data acquisition. Acquisition methods vary because good devices are costly, yet a cheap method is needed if a Sign Language Recognition system is to be commercialized. The techniques used to build Sign Language Recognition also vary between researchers; each method has its own strengths and limitations compared to the others, and researchers continue to use different methods in developing their own systems. The aim of this project is to review sign language recognition using ML approaches and identify the best method that has been used by researchers, so that others can learn about these methods and develop better Sign Language Application Systems in the future.
Why: Problem statement
Deaf people face many irritations and frustrations that limit their ability to handle everyday tasks and emergency situations. Research indicates that Deaf people, especially Deaf children, have high rates of behavioral and emotional issues related to different methods of communication. Many people with such disabilities become introverted and resist social interaction and face-to-face socialization. The inability to speak with family and friends can cause low self-esteem and may result in social isolation. Not only do Deaf people lack social interaction; communication is also a major barrier to Deaf-mute health care. In such conditions, it becomes difficult for a caretaker to interact with a deaf person.
Various medical treatments are available to the deaf community to address deafness, but these treatments are expensive. Several problems are encountered by Deaf-mutes and hearing individuals during communication. The problem is not confined to the Deaf-mute person who is unable to hear or speak; another drawback is the lack of awareness of Deaf culture among hearing individuals. Deaf people especially struggle to communicate in emergency situations.
How: Solution description
Here, we found two types of solutions for this problem.
Showing hand signs for an emergency situation: if the person shows one of the emergency hand signs, the buzzer is turned on immediately.
Pressing the button on the wristband: when the button is pressed, the buzzer sounds.
Sign Language Recognition:
Hand gestures are one of the methods used in sign language for non-verbal communication. Hand signs are most commonly used by deaf-mute people, who have hearing or speech problems, to communicate among themselves or with hearing people. Various sign language systems have been developed by many makers around the world, but they are neither flexible nor cost-effective for end users. Hence, in this project we introduce a machine learning model that automatically recognizes sign language to help deaf-mute people communicate more effectively with each other or with hearing people. First, we trained five emergency signs using a machine learning model (a CNN). We also wrote Arduino code to turn the buzzer on and off. We then combined the two parts (the ML model and the Arduino code) using serial communication. If the person shows any one of the trained signs during an emergency, the buzzer turns on so that they can get help from nearby people. The overall block diagram is as follows.
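The serial-communication step described above might look roughly like the following on the Python side. This is a sketch under assumptions: the five label names and the one-byte protocol (sending b"1" to turn the buzzer on) are illustrative, not the project's actual values. With pyserial, `port` would be a `serial.Serial` instance; any object with a `write` method works here.

```python
# Hypothetical labels for the five trained emergency signs.
EMERGENCY_SIGNS = {"help", "pain", "fire", "police", "doctor"}

def alert_if_emergency(predicted_sign, port):
    """Send b'1' over the serial port when a trained emergency sign is seen.

    The Arduino side is assumed to turn the buzzer ON when it receives '1'.
    Returns True if an alert was sent, False otherwise.
    """
    if predicted_sign in EMERGENCY_SIGNS:
        port.write(b"1")
        return True
    return False
```

In practice the port would be opened once at startup, e.g. `serial.Serial("/dev/ttyUSB0", 9600)` (port name and baud rate are assumptions), and `alert_if_emergency` called on every CNN prediction.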
Pressing button on the wrist band:
In some situations, people may forget to show the emergency hand signs. In those cases, the person can indicate the emergency to others by pressing the button on the wristband fitted on their wrist. When the push-button is pressed, a signal is sent to the microcontroller; the NodeMCU receives the signal and turns on the buzzer, which alerts people nearby.
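The button-press detection can be sketched as a simple rising-edge check. On the real wristband this logic runs on the NodeMCU; here it is written in Python over a list of sampled button states purely as an illustration of the idea.

```python
def detect_presses(samples):
    """Return the indices where the button goes from released (0) to pressed (1).

    A new press is a rising edge: the current sample is 1 while the
    previous sample was 0, so holding the button counts only once.
    """
    presses = []
    previous = 0
    for i, state in enumerate(samples):
        if state == 1 and previous == 0:
            presses.append(i)
        previous = state
    return presses
```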
How is it different from competition
In the sign language recognition project we implemented two types of alert system: the first uses a machine learning model to recognize the hand sign, and the second uses an IoT wristband whose buzzer turns on when the button is pushed. This is the uniqueness of the project. Various hand gesture systems have been developed by many makers around the world, but they are neither flexible nor cost-efficient for end users. This project, in contrast, is cost-effective, more accurate, and simple for the end user.
Who are your customers
Hand gestures are one of the methods used in sign language for non-verbal communication, most commonly used by deaf-mute individuals, who have hearing or speech issues, to communicate among themselves or with hearing people. Hence, this project introduces a software prototype that automatically recognizes sign language to help deaf-mute individuals communicate more efficiently with each other or with hearing people. Pattern recognition and gesture recognition are developing fields of research; gestures are a significant part of nonverbal communication and play a key role in our daily life. A hand gesture recognition system provides an innovative, natural, user-friendly way of communicating with the computer that is familiar to human beings. By considering the shape of the human hand, with four fingers and one thumb, the software aims to present a real-time system for recognizing hand gestures based on the detection of shape-based features such as orientation, centroid (centre of mass), finger status, and thumb position (raised or folded fingers).
Project Phases and Schedule
Install Python and all prerequisites such as NumPy and OpenCV.
First, we collect input images and train a model to recognize hand signs.
Find and segment the hand region from the camera feed using OpenCV.
Training: We took 5 hand signs that indicate emergency situations and trained on those signs using a CNN (Convolutional Neural Network).
Prediction: If the person shows any of the trained signs, it is treated as an emergency situation and the buzzer is sounded via serial communication.
Construction: The construction of the project is mainly based on the NodeMCU. The microcontroller, buzzer, battery, and button are interfaced using jumper wires, and all of these are fitted to the wristband.
Connection: All components are connected to the microcontroller with jumper wires. The microcontroller is powered by the lithium battery. The buzzer's positive and negative leads connect to the microcontroller, and its signal pin connects to a digital pin; the button's positive and negative pins connect to the microcontroller for the power supply.
Program: We use the NodeMCU microcontroller, so the program is written with the Arduino toolchain. In the program flow, when the user presses the button, the program sends a signal to the buzzer, which then sounds for 2 minutes.
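The program flow above (a button press starts a 2-minute buzz) can be sketched as a small per-loop state update. This is a Python illustration of the logic only, not the actual Arduino sketch; timestamps are assumed to be in seconds.

```python
BUZZ_SECONDS = 120  # buzz for 2 minutes, as described in the program flow

def buzzer_step(pressed, now, buzz_until):
    """One loop iteration of the buzzer logic.

    If the button is pressed, (re)start the 2-minute window.
    Returns the updated deadline and whether the buzzer pin
    should currently be driven HIGH.
    """
    if pressed:
        buzz_until = now + BUZZ_SECONDS
    return buzz_until, now < buzz_until
```

On the microcontroller the same logic would run inside `loop()`, comparing `millis()` against the stored deadline instead of blocking with a delay.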
Testing: Finally, we tested and verified the connections and the program code.
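The hand-segmentation phase above can be sketched in a few lines. This is a NumPy-only stand-in for the OpenCV calls the project would use (such as cv2.absdiff and cv2.threshold); the threshold value of 25 is an illustrative assumption.

```python
import numpy as np

def segment_hand(frame, background, thresh=25):
    """Return a binary mask (0 or 255) where the grayscale frame
    differs from a static background by more than `thresh`.

    Casting to int16 first avoids uint8 wrap-around when subtracting.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8) * 255
```

The resulting mask would then be cleaned up (e.g. with morphological operations) and cropped to the largest contour before being fed to the classifier.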
Anaconda tool: Anaconda is a free and open-source distribution of the Python and R programming languages for scientific computing (data science, machine learning applications, large-scale data processing, predictive analytics, etc.) that aims to simplify package management and deployment. The distribution includes data-science packages suitable for Windows, Linux, and macOS. You can download the Anaconda tool for Python from this link: https://anaconda.org/anaconda/python.
Python version 3.7: Under the Python Releases for Windows, find the latest Python 3 release (Python 3.7.4 is the latest stable release at the time of writing). The advantages of Python 3.7 include easier access to debuggers through the new breakpoint() built-in, simple class creation using data classes, customized access to module attributes, improved support for type hinting, and higher-precision timing functions.
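As a small illustration of one feature listed above, a data class generates the constructor, comparison, and repr methods automatically. The `Reading` class and its fields are hypothetical, chosen only to fit this project's domain.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One classifier result: the predicted sign and its confidence."""
    sign: str
    confidence: float

r = Reading("help", 0.93)
print(r)  # auto-generated __repr__: Reading(sign='help', confidence=0.93)
```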
Libraries: OpenCV 2.4.8, NumPy - While working with OpenCV, an open-source computer vision library, I needed to engineer a solution that would grab an image from the screen, then resize and transform the image into a NumPy array that my model could understand.
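The resize-and-transform step can be sketched with NumPy alone, using nearest-neighbour indexing in place of cv2.resize; the 64x64 input size and the [0, 1] scaling are illustrative assumptions about the model's expected input.

```python
import numpy as np

def to_model_input(image, size=(64, 64)):
    """Resize a grayscale image with nearest-neighbour sampling and
    scale pixel values from [0, 255] to [0.0, 1.0] for the model."""
    h, w = image.shape
    rows = np.arange(size[0]) * h // size[0]   # source row for each output row
    cols = np.arange(size[1]) * w // size[1]   # source column for each output column
    resized = image[rows[:, None], cols]       # fancy indexing broadcasts to size
    return resized.astype(np.float32) / 255.0
```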
Node MCU: NodeMCU is a low-cost open-source IoT platform. It initially included firmware that runs on the ESP8266 Wi-Fi SoC, and hardware based on the ESP-12 module.
Battery: A lithium battery is a primary battery that has metallic lithium as an anode. These batteries are also known as lithium-metal batteries. They stand apart from other batteries in their high charge density (long life) and high cost per unit.
Buzzer: A buzzer or beeper is an audio signaling device, which may be mechanical, electromechanical, or piezoelectric (piezo for short).
Typical uses of buzzers and beepers include alarm devices, timers, and confirmation of user input such as a mouse click or keystroke.
Button: When the button is pressed and held down for a few seconds, the ESP8266 chip sends the signal to the buzzer.