To use Yosnalab's shopping services, you must first read these terms and conditions carefully. If you disagree with any part of the terms, you may not access our service.
I agree to pay the rental rate for the period the product is in my use and in transit, and further agree to promptly return the products at the end of the rental period in the same condition as received.
I will notify Yosnalab if my contact information changes.
I also agree to pay for any damage to, or loss of, the rented merchandise occurring while it is in my possession. Upon return and inspection, any necessary repairs and any missing itemized accessories will be charged at current rates and billed to me.
A full day's rental is charged even for partial-day use. There are absolutely no refunds for early returns.
Products may be rented for up to 4 weeks only; the cost is calculated per day of use. We are not responsible for damage to materials or any other liability resulting from the use or malfunction of the equipment. The initial deposit will not be refunded until the product is returned.
I am responsible for keeping track of my due dates. I understand that any notices sent by Yosnalab are a courtesy only, and that failure to receive them does not excuse me from any charges.
A copy of both sides of a valid institution identification card must be presented. Deposits may be made only in cash, not by credit/debit card.
If you have questions or suggestions, please contact us.
With the rapid development of computing techniques and the growing variety of programming methods, user communication through pointing and positioning devices such as the mouse, keyboard, and pen is no longer sufficient. These devices are limited to a small set of commands. Using parts of the human body, such as the hands, is a better choice: our hands can serve as an input device for natural communication.
Hand movement tracking is one of the most active areas of research in computer vision. It provides an easy way to interact or communicate with a machine without any extra devices. Even a user with limited technical knowledge of computers can use such a system easily.
Hand movement tracking studies how users interact with computers and how computers can interact successfully with humans. Different algorithms are used to obtain reliable results from this human-computer interaction. Human body motions include movements of the head, hands, fingers, and so on. No special input device is needed for the interaction: the hand itself serves as a direct input device, and through hand movements users can control computers in a natural way.
Why: Problem statement
The most difficult step for a machine is translating the movement a user performs in a video sequence into understandable commands: the machine must measure the detected hand positions and track their movement.
How: Solution description
This project proposes a live hand-movement recognition system that serves as an alternative human-computer interface. We target gestures performed by hand motion; around 10 hand-signed digits are trained. The hand-motion track with the largest standard deviation is chosen as the gesture, and histogram models are used to detect the gesture tracks. The system achieves a recognition rate of approximately 97.33%.
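The project does not spell out how "most standard deviation" is computed, so here is a minimal sketch, assuming each candidate track is a list of (x, y) hand centroids and that the spread of those coordinates measures how much motion the track contains; the function names are illustrative only:

```python
import numpy as np

def track_spread(track):
    """Sum of the standard deviations of a track's x and y coordinates,
    used as a simple measure of how much motion the track contains."""
    track = np.asarray(track, dtype=float)
    return float(track[:, 0].std() + track[:, 1].std())

def pick_gesture_track(tracks):
    """Return the candidate track with the largest spread (most motion)."""
    return max(tracks, key=track_spread)

# A nearly static track versus a track sweeping across the frame:
static = [(100, 100), (101, 100), (100, 101), (101, 101)]
sweep = [(10, 10), (50, 40), (90, 80), (130, 120)]
chosen = pick_gesture_track([static, sweep])  # -> the sweeping track
```

A static hand (jitter of a pixel or two) yields a near-zero spread, so the sweeping motion is selected as the gesture candidate.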
This project mainly aims to assist physically challenged persons who cannot use a mouse to perform tasks on a computer but can still perform a few hand motions. Hand movement tracking is one of the best approaches for this purpose, and this project presents a new algorithm for it.
In this application, where we wish to track a user's hand movement, a skin-color histogram is very useful. The histogram is used to subtract the background from a given image, leaving only the parts of the image that contain skin tones. An even simpler way to detect skin is to find pixels that fall within a certain RGB or HSV range.
We use the skin-color histogram to find the parts of the frame that contain skin. OpenCV provides a convenient function that uses a histogram to highlight matching regions in an image (histogram back projection), and we use it to apply the skin-color histogram to each frame.
In this project, we will also look at how to create a deep learning model using semantic segmentation.
Semantic segmentation is the process of assigning each pixel of an image to a class label. These labels may include a person, a car, a flower, a piece of furniture, and so on.
We can describe semantic segmentation as image classification at the pixel level. For example, in an image containing many buses, semantic segmentation labels all of them as belonging to the single "bus" class. A separate family of models, called instance segmentation, can instead label each separate instance of an object in an image. That type of segmentation is useful in applications that count objects, such as measuring foot traffic in a mall.
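"Classification at the pixel level" can be made concrete with a tiny sketch: a segmentation network outputs a per-pixel score for each class, and the label map is the argmax over those scores. The numbers and class names below are purely illustrative:

```python
import numpy as np

# Per-pixel class scores for a tiny 2x2 image and 3 classes
# (0 = background, 1 = person, 2 = bus) -- illustrative values only,
# standing in for the output of a segmentation network.
scores = np.array([
    [[0.9, 0.05, 0.05], [0.1, 0.2, 0.7]],
    [[0.2, 0.7, 0.1],   [0.1, 0.1, 0.8]],
])

# Semantic segmentation = image classification at the pixel level:
# each pixel receives the label of its highest-scoring class.
labels = scores.argmax(axis=-1)  # -> [[0, 2], [1, 2]]
```

Note that both "bus" pixels get the same label 2; distinguishing them as two separate buses is exactly what instance segmentation adds.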
How is it different from the competition
It is an ideal project for learning and applying deep learning techniques to track hand movement and help users. One of the most difficult challenges I faced in tracking hand movement was differentiating the hand from the background and identifying the tip of a finger or other object. I present and explain the finger/object tracking technique I used in this project.
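The text does not detail the fingertip-identification step, so here is one common, minimal heuristic as a sketch (not necessarily the project's exact method): once the hand has been separated from the background as a binary mask, take the topmost foreground pixel as the fingertip, which works when the hand enters the frame from the bottom:

```python
import numpy as np

def fingertip(mask):
    """Locate the fingertip as the topmost (smallest-row) foreground
    pixel of a binary hand mask -- a simple heuristic that assumes the
    hand enters the frame from the bottom with a finger extended up."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                      # no hand in the frame
    top = rows.min()
    # take the leftmost column on the topmost row
    return (int(top), int(cols[rows == top].min()))

# 5x5 mask with a crude "finger" pointing up from a palm blob.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:5, 2] = 1      # the finger column
mask[3:5, 1:4] = 1    # the palm
tip = fingertip(mask)  # -> (1, 2)
```

Collecting this fingertip position frame after frame yields the motion track that the gesture-recognition stage consumes. More robust alternatives (convex hull plus convexity defects on the hand contour) exist in OpenCV but are beyond this sketch.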
Who are your customers
The customers for this project are physically challenged people and, more generally, the public and anyone who uses computers and laptops.
Project Phases and Schedule
Phase 1: Framing
Phase 2: Masking
Phase 3: Models
Phase 4: Tracking
Tools used: Anaconda (Python 3.6) and OpenCV.