Vision based indoor positioning in a retail environment

dc.date.accessioned 2011-06-14T10:51:50Z und
dc.date.accessioned 2017-11-06T12:23:05Z
dc.date.available 2011-06-14
dc.date.available 2011-06-14T10:51:50Z und
dc.date.available 2017-11-06T12:23:05Z
dc.date.issued 2011-02-01
dc.identifier.uri http://hdl.handle.net/10138/26581
dc.publisher Helsingin yliopisto fi
dc.publisher Helsingfors universitet sv
dc.publisher University of Helsinki en
dc.title Vision based indoor positioning in a retail environment en
ethesis.department.URI http://data.hulib.helsinki.fi/id/225405e8-3362-4197-a7fd-6e7b79e52d14
ethesis.department Institutionen för datavetenskap sv
ethesis.department Department of Computer Science en
ethesis.department Tietojenkäsittelytieteen laitos fi
ethesis.faculty Matematisk-naturvetenskapliga fakulteten sv
ethesis.faculty Matemaattis-luonnontieteellinen tiedekunta fi
ethesis.faculty Faculty of Science en
ethesis.faculty.URI http://data.hulib.helsinki.fi/id/8d59209f-6614-4edd-9744-1ebdaf1d13ca
ethesis.university.URI http://data.hulib.helsinki.fi/id/50ae46d8-7ba9-4821-877c-c994c78b0d97
ethesis.university Helsingfors universitet sv
ethesis.university University of Helsinki en
ethesis.university Helsingin yliopisto fi
dct.creator Uitto, Jara
dct.issued 2011
dct.language.ISO639-2 eng
dct.abstract Modern smartphones often come with a significant amount of computational power and an integrated digital camera, making them an ideal platform for intelligent assistants. This work is restricted to retail environments, where users could be provided with, for example, navigational instructions to desired products or information about special offers in their close proximity. Applications of this kind usually require information about the user's current location in the domain environment, which in our case is a retail store. We propose a vision-based positioning approach that recognizes the products the camera of the user's mobile phone is currently pointing at. The products are related to locations within the store, which enables us to locate the user by pointing the phone's camera at a group of products. The first step of our method is to extract meaningful features from digital images. We use the Scale-Invariant Feature Transform (SIFT) algorithm, which extracts features that are highly distinctive in the sense that they can be correctly matched against a large database of features from many images. We collect a comprehensive set of images from all meaningful locations within our domain and extract the SIFT features from each of these images. As the SIFT features are of high dimensionality, and comparing individual features is therefore infeasible, we apply the Bags of Keypoints method, which creates a generic representation, a visual category, from all features extracted from the images taken at a specific location. The category of an unseen image can be deduced by extracting the corresponding SIFT features and choosing the category that best fits them. We have applied the proposed method in a Finnish supermarket. We consider grocery shelves as categories, which is a sufficient level of accuracy to help users navigate or to provide useful information about nearby products. We achieve 40% accuracy, which is quite low for commercial applications but significantly outperforms the random-guess baseline. Our results suggest that the classification accuracy could be increased with a deeper analysis of the domain and by combining existing positioning methods with ours. en
dct.subject Computer vision en
dct.subject Scale-Invariant Feature Transform en
dct.subject SIFT en
dct.subject Indoor positioning en
dct.subject Density-Join based clustering en
dct.subject Scale-space en
dct.language en
ethesis.language.URI http://data.hulib.helsinki.fi/id/languages/eng
ethesis.language English en
ethesis.language englanti fi
ethesis.language engelska sv
ethesis.supervisor Nurmi, Petteri
ethesis.supervisor Myllymäki, Petri
ethesis.thesistype pro gradu-avhandlingar sv
ethesis.thesistype pro gradu -tutkielmat fi
ethesis.thesistype master's thesis en
ethesis.thesistype.URI http://data.hulib.helsinki.fi/id/thesistypes/mastersthesis
dct.identifier.urn URN:NBN:fi-fe201106141760
dc.type.dcmitype Text
dct.rights This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited. en
dct.rights Publikationen är skyddad av upphovsrätten. Den får läsas och skrivas ut för personligt bruk. Användning i kommersiellt syfte är förbjuden. sv
dct.rights Julkaisu on tekijänoikeussäännösten alainen. Teosta voi lukea ja tulostaa henkilökohtaista käyttöä varten. Käyttö kaupallisiin tarkoituksiin on kielletty. fi
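
The pipeline described in the abstract above (SIFT feature extraction followed by a Bags of Keypoints representation per store location, then classification of an unseen image into a shelf category) can be sketched roughly as follows. This is a minimal illustrative sketch only, not the thesis's implementation: it uses OpenCV's SIFT and scikit-learn, substitutes k-means for the codebook clustering (the subject terms above suggest the thesis used a density-based clustering method instead), and the names train_images, build_vocabulary, locate and the vocabulary size k=200 are hypothetical choices made for this example.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

# SIFT is available in opencv-python >= 4.4 via cv2.SIFT_create().
sift = cv2.SIFT_create()

def sift_descriptors(path):
    """Return the 128-dimensional SIFT descriptors of one image (possibly empty)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    return desc if desc is not None else np.zeros((0, 128), dtype=np.float32)

def build_vocabulary(descriptor_sets, k=200):
    """Cluster the pooled descriptors into k visual words (the codebook).
    k-means stands in here for whatever clustering the thesis actually used."""
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(np.vstack(descriptor_sets))

def bow_histogram(desc, vocab):
    """Quantize descriptors against the codebook; return a normalized word histogram."""
    if len(desc) == 0:
        return np.zeros(vocab.n_clusters)
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float64)
    return hist / hist.sum()

def train(train_images, k=200):
    """train_images: hypothetical dict mapping a shelf/location label to image paths."""
    labels, descriptor_sets = [], []
    for location, paths in train_images.items():
        for p in paths:
            labels.append(location)
            descriptor_sets.append(sift_descriptors(p))
    vocab = build_vocabulary(descriptor_sets, k)
    features = np.array([bow_histogram(d, vocab) for d in descriptor_sets])
    classifier = LinearSVC().fit(features, labels)
    return vocab, classifier

def locate(query_path, vocab, classifier):
    """Predict the shelf/location label for an unseen camera image."""
    hist = bow_histogram(sift_descriptors(query_path), vocab)
    return classifier.predict([hist])[0]
```

A real deployment would also need to handle query images with too few keypoints and could, as the abstract suggests, combine the visual estimate with other positioning methods.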

Files in this item

File Size Format
visionba.pdf 1.833 MB PDF
