Browsing by Title
Now showing items 41-60 of 4247
-
(2018)De-anonymization is an important requirement in real-world V2X systems (e.g., to enable effective law enforcement). In de-anonymization, a pseudonymous identity is linked to a long-term identity in a process known as pseudonym resolution. For de-anonymization to be acceptable from political, social and legislative points of view, it has to be accountable. A system is accountable if no action by it or using it can be taken without some entity being responsible for the action. Being responsible for an action means that the responsible entity cannot deny its responsibility for, or relation to, an action afterwards. The main research question is: how can we achieve accountable pseudonym resolution in V2X communication systems? One possible answer is to develop an accountable de-anonymization service that is compatible with existing V2X pseudonym schemes. Accountability can be achieved by making specific entities accountable for the de-anonymization. This thesis proposes a system design that enables (i) fine-grained pseudonym resolution; (ii) the possibility to inform the subject of the resolution after a suitable time delay; and (iii) the possibility for the public to audit the aggregate number of pseudonym resolutions. A trusted execution environment (TEE) is used to ensure these accountability properties. The security properties of the design are verified using symbolic protocol analysis.
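As an illustration of the public-audit idea described above – not the thesis's actual design – the sketch below keeps a hash-chained log of resolution events in a hypothetical TEE-hosted service and exposes only the aggregate count to the public. All names and fields are assumptions made for the example.

```python
# Illustrative sketch (not the thesis's design): a hash-chained log of
# pseudonym-resolution events plus a public aggregate counter. In the thesis,
# such state would be maintained inside a TEE so that no resolution can be
# performed without leaving an accountable trace.
import hashlib
import json
import time


class ResolutionLog:
    def __init__(self):
        self._chain_head = b"\x00" * 32   # genesis value for the hash chain
        self._count = 0                   # aggregate number of resolutions

    def record(self, pseudonym_id: str, requester: str) -> str:
        """Append a resolution event and return its chained digest."""
        entry = json.dumps({
            "pseudonym": pseudonym_id,
            "requester": requester,
            "timestamp": time.time(),
        }, sort_keys=True).encode()
        self._chain_head = hashlib.sha256(self._chain_head + entry).digest()
        self._count += 1
        return self._chain_head.hex()

    def public_audit_view(self) -> dict:
        """Only the aggregate count and the chain head are published."""
        return {"resolutions": self._count,
                "chain_head": self._chain_head.hex()}


log = ResolutionLog()
log.record("PSN-1234", "law-enforcement-unit-7")   # hypothetical identifiers
print(log.public_audit_view())
```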
-
(2019)The Baltic Sea is a vulnerable marine environment and susceptible to pollution. The situation is especially severe in the Gulf of Finland due to its large catchment area compared to the size of the Gulf. The north-eastern Gulf of Finland has been described as one of the most contaminated areas of the entire Baltic Sea, with an extensive pollution load via the river Kymi in the past. Still today, the currents bring contaminants from the eastern part of the Gulf – the Neva estuary and the Bay of Viborg. The concentrations of V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Mo, Cd, Sb, Hg, Pb, Bi and La were studied in the surface sediments and in three GEMAX cores. The vertical distribution revealed the temporal change in metal accumulation. The spike in the Cs concentration, marking the Chernobyl disaster in 1986, enabled the estimation of the accumulation of the studied elements over time. The horizontal distribution maps based on the concentrations in the surface sediments enabled the discovery of the sites with the most intense metal accumulation. Correlation coefficients showed the effect of carbon and sediment grain size on the distribution of metals. Comparison of the metal concentrations to natural background levels and the Canadian sediment quality guidelines (SQGs) enabled the estimation of the degree of contamination of the area. The metal concentrations have declined during the last decades in the north-eastern Gulf of Finland, indicating a lower contamination input towards the present day. However, in the oxidized Ravijoki core, the decline was not as obvious, probably due to metal scavenging by Fe and Mn oxides and bioturbation. The regional metal distribution was strongly affected by grain size and carbon – most metals showed high positive correlations with carbon and the finer sediment fraction. Mn was an exception, showing negative correlations with both carbon and clay, probably due to Mn reduction at sites with high organic matter accumulation. The regional distribution pattern suggested that the main Cd pollution arrives from the eastern part of the Gulf. The distribution of Hg, Mo, Cu and Zn also suggested a possible source in the east. High concentrations of Hg, Pb and Cu were discovered in the outlets of the river Kymi. According to the Canadian SQGs, the sediments in the north-eastern Gulf of Finland were contaminated. The situation is especially severe in the case of Zn – the higher reference value, PEL, above which adverse biological effects frequently occur, was exceeded even in the oxidized Ravijoki sediments. The highest concentrations of the elements with defined SQGs (Cd, Cr, Zn, Cu, Hg, Pb and As) exceeded the lower reference values in the surface sediments, indicating that all these metals could, at least locally, pose a severe threat to benthic species.
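The correlation and guideline-comparison analysis described above can be illustrated with a minimal pandas sketch; the file name, column names and threshold values below are placeholders, not the actual data or Canadian SQG values.

```python
# Illustrative sketch only: hypothetical columns and placeholder thresholds.
import pandas as pd

df = pd.read_csv("surface_sediments.csv")  # one row per sampling site (hypothetical file)

# Correlation of metal concentrations with organic carbon and fine-fraction content
print(df[["Zn", "Hg", "Pb", "Mn", "C_org", "clay_pct"]].corr(method="spearman"))

# Flag sites exceeding a guideline value (placeholder numbers, not the real SQGs)
placeholder_pel = {"Zn": 100.0, "Hg": 1.0, "Pb": 100.0}  # mg/kg dry weight, illustrative
for metal, limit in placeholder_pel.items():
    exceeding = df.loc[df[metal] > limit, "site"]
    print(metal, "exceeds the placeholder threshold at sites:", list(exceeding))
```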
-
(2020)Heart rate (HR) monitoring has been the foundation of much research and many applications in the fields of health care, sports and fitness, and physiology. With the development of affordable non-invasive optical heart rate monitoring technology, continuous monitoring of heart rate and related physiological parameters is increasingly possible. While this allows continuous access to heart rate information, its potential is severely constrained by the inaccuracy of the optical sensor that provides the signal for deriving heart rate information. Among all the factors influencing sensor performance, hand motion is a particularly significant source of error. In this thesis, we first quantify the robustness and accuracy of wearable heart rate monitors under everyday scenarios, demonstrating their vulnerability to different kinds of motion. We then develop DeepHR, a deep learning based calibration technique, to improve the quality of heart rate measurements on smart wearables. DeepHR associates the motion features captured by the accelerometer and gyroscope on the wearable with a reference sensor, such as a chest-worn HR monitor. Once pre-trained, DeepHR can be deployed on smart wearables to correct the errors caused by motion. Through rigorous and extensive benchmarks, we demonstrate that DeepHR significantly improves the accuracy and robustness of HR measurements on smart wearables, being superior to standard fully connected deep neural network models. In our evaluation, DeepHR is capable of generalizing across different activities and users, demonstrating that having a general pre-trained and pre-deployed model for various individual users is possible.
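A minimal sketch of the calibration setup described above – a plain fully connected baseline, not the DeepHR architecture itself – regressing per-window optical HR and IMU motion features against a chest-strap reference; the feature layout and dimensions are assumptions.

```python
# Baseline calibration model (illustrative): motion features from the wrist IMU
# plus the optical HR estimate are regressed against a reference HR.
import torch
import torch.nn as nn

class HRCalibrator(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),            # corrected heart rate (bpm)
        )

    def forward(self, x):
        return self.net(x)

# x: [optical_hr, accelerometer/gyroscope summary statistics per window]; y: reference HR
x = torch.randn(256, 13)                 # dummy data standing in for real windows
y = torch.randn(256, 1)
model = HRCalibrator(n_features=13)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(10):                      # illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```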
-
(2024)The advancement of high-throughput imaging technologies has revolutionized the study of the tumor microenvironment (TME), including in high-grade serous ovarian carcinoma (HGSOC), a cancer type characterized by genetic instability and high intra-tumor heterogeneity. HGSOC is often diagnosed at advanced stages and has a high relapse rate following initial treatment, presenting significant clinical challenges. Understanding the dynamic and complex tumor microenvironment in HGSOC is crucial for developing effective therapeutic strategies, as it includes various interacting cells and structures. Currently, most methods focus on deciphering the TME at the single-cell level, but the volume of the data poses a challenge in large-scale studies. This thesis focuses on developing a comprehensive pipeline for accurate detection and phenotyping of immune cells within the TME using tissue cyclic immunofluorescence imaging. The proposed pipeline integrates Napari, an advanced visualization tool, and several existing computational methods to handle large-scale imaging datasets efficiently. The primary aim is to create Napari plugins for fast browsing and detailed visualization of these datasets, enabling precise cell phenotyping and quality control. Handling large images was resolved through the implementation of Zarr and Dask methodologies, enabling efficient data management. Key image processing methodologies include the use of the StarDist algorithm for cell segmentation, preprocessing steps for fluorescence intensity normalization, and the Tribus tool for semi-automated cell type classification. In total, we annotated 976,082 single cells in three HGSOC samples originating from pre- or post-neoadjuvant chemotherapy tumor sections. The accurate annotation of immune sub-populations was enhanced by visual evaluation steps, addressing the limitations of the discussed methods. Accurately annotating dense tissue areas is crucial for describing the cellular composition of samples, particularly tumor-infiltrating immune populations. The results indicate that the proposed pipeline not only enhances the understanding of the TME in HGSOC but also provides a robust framework for future studies involving large-scale imaging data.
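A minimal sketch of the lazy-loading and segmentation step, assuming a hypothetical Zarr file layout and the standard StarDist pretrained-model API; the Tribus classification and Napari plugins are not shown.

```python
# Illustrative sketch: lazily open a cyclic immunofluorescence image stored as
# Zarr with Dask, pull one tile into memory, and segment nuclei with StarDist.
import dask.array as da
import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist2D

# Hypothetical file with shape (channels, y, x); channel 0 assumed to be DAPI
img = da.from_zarr("sample_post_nact.zarr")
dapi = np.asarray(img[0, :4096, :4096])        # materialize a single tile

model = StarDist2D.from_pretrained("2D_versatile_fluo")
labels, _ = model.predict_instances(normalize(dapi))
print("segmented cells in tile:", labels.max())
```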
-
(2014)In this thesis we study the theoretical foundations of distributed computing. Distributed computing is concerned with graphs, where each node is a computing unit and runs the same algorithm. The graph serves both as a communication network and as an input for the algorithm. Each node communicates with adjacent nodes in a synchronous manner and eventually produces its own output. All the outputs together constitute a solution to a problem related to the structure of the graph. The main resource of interest is the amount of information that nodes need to exchange. Hence the running time of an algorithm is defined as the number of communication rounds; any amount of local computation is allowed. We introduce several models of distributed computing that are weaker versions of the well-established port-numbering model. In the port-numbering model, a node of degree d has d input ports and d output ports, both numbered with 1, 2, ..., d such that the port numbers are consistent. We denote by VVc the class of all graph problems that can be solved in this model. We define the following subclasses of VVc, corresponding to the weaker models: VV: Input and output port numbers are not necessarily consistent. MV: Input ports are not numbered; nodes receive a multiset of messages. SV: Input ports are not numbered; nodes receive a set of messages. VB: Output ports are not numbered; nodes broadcast the same message to all neighbours. MB: Combination of MV and VB. SB: Combination of SV and VB. This thesis presents a complete classification of the computational power of the models. We prove that the corresponding complexity classes form the following linear order: SB ⊊ MB = VB ⊊ SV = MV = VV ⊊ VVc. To prove SV = MV, we show that any algorithm receiving a multiset of messages can be simulated by an algorithm that receives only a set of messages. The simulation causes an additive overhead of 2∆ - 2 communication rounds, where ∆ is an upper bound for the maximum degree of the graph. As a new result, we prove that the simulation is optimal: it is not possible to achieve a simulation overhead smaller than 2∆ - 2. Furthermore, we construct a graph problem that can be solved in one round of communication by an algorithm receiving a multiset of messages, but requires at least ∆ rounds when solved by an algorithm receiving only a set of messages.
-
(2024)Message-oriented middleware (MOM) serves as the intermediary component between the nodes of a distributed system, facilitating their communication and data exchange. By decoupling the interconnected nodes of a system, MOM technologies enable scalable and fault-tolerant messaging, supporting real-time data streams, event-driven architectures and microservices communication. Given the increasing reliance on distributed computing and data-intensive applications, understanding the performance and operational characteristics of MOM technologies is paramount. This master's thesis investigates the comparative performance and operational aspects of two prominent MOM solutions, Apache Kafka and Apache Pulsar, through a systematic literature review (SLR). The key characteristics under inspection are throughput, latency, resource utilization, fault tolerance, security and operational complexity. This study offers a comprehensive analysis to aid informed decision-making in real-world deployment scenarios and augments the existing body of literature. The results of this SLR show that consensus on throughput and latency superiority between Kafka and Pulsar remains elusive. Pulsar demonstrates advantages in resource utilization and security, whereas Kafka stands out for its maturity and operational simplicity.
-
(2020)Traditional flat classification methods (e.g., binary, multiclass, and multi-label classification) seek to associate each example with a single class label or a set of labels without any structural dependence among them. However, there are problems in which classes can be divided into subclasses or grouped into superclasses. Such a scenario demands the application of methods prepared to deal with hierarchical classification. An algorithm for hierarchical classification uses the information about the structure present in the class hierarchy and thereby improves predictive performance. The freedom to perform a more generic classification, but with higher reliability, gives the process greater versatility. Several studies have shown that, in solving a hierarchical classification problem, flat models are mostly outperformed by hierarchical ones, regardless of the chosen approach – local (including its derivations) or global. This thesis aims to compare the most popular hierarchical classification methods (local and global) empirically, reporting their performance measured using hierarchical evaluation indexes. To do so, we had to adapt the global hierarchical models to conduct single-path predictions, starting from the root class and moving towards a leaf class within the hierarchical structure. Further, we applied hierarchical classification to data streams by detecting concept drift. We first study data streams, various types of concept drift, and state-of-the-art concept drift detection methods. Then we implement Global-Model Hierarchical Classification Naive Bayes (GNB) with three concept drift detectors: (i) the Kolmogorov-Smirnov test, (ii) the Wilcoxon test, and (iii) the Drift Detection Method (DDM). A fixed-size sliding window was used to estimate the performance of GNB online. Finally, we must highlight that this thesis contributes to the task of automatic insect recognition.
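A minimal sketch of window-based drift detection with the Kolmogorov-Smirnov test, one of the three detectors mentioned above; the window size, monitored statistic and significance level are assumptions, not the thesis's settings.

```python
# Illustrative drift detector: compare a reference window and a current window
# of per-example scores (e.g. losses) with a two-sample KS test.
from collections import deque
from scipy.stats import ks_2samp

WINDOW = 200
reference = deque(maxlen=WINDOW)   # scores collected on past, stable data
current = deque(maxlen=WINDOW)     # scores collected on the most recent data

def drift_detected(alpha: float = 0.01) -> bool:
    """Signal drift when the two score windows differ significantly."""
    if len(reference) < WINDOW or len(current) < WINDOW:
        return False
    _, p_value = ks_2samp(list(reference), list(current))
    return p_value < alpha
```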
-
(2024)Monolithic and microservice architectures represent two different approaches to building and organizing software systems. Monolithic architecture offers various advantages, such as simplicity of application deployment, smaller resource requirements, and lower latency. On the other hand, microservice architecture provides benefits in aspects including scalability, reliability, and availability. However, the advantages of each architecture may depend on various factors, especially when it comes to application performance and resource consumption. This thesis aims to provide insights into the differences in application performance and resource consumption between the two architectures by conducting a systematic literature review of the existing literature and research results, and by benchmarking with various load tests two applications with identical functionality built on the two architectures. Results from the load tests revealed that the applications in both software architectures delivered satisfactory outcomes. However, the test outputs indicated that the microservice system outperformed the monolithic one by a wide margin in nearly all test cases, in aspects including throughput, efficiency, stability, scalability, and resource effectiveness. Based on the research outcomes from the reviewed literature, monolithic design is in general more efficient and cost-effective for simple applications with small user loads, while microservice architecture is more advantageous for large and complex applications targeting high traffic and deployment in cloud environments. Nevertheless, the overall research results indicated that both architectures have strengths and drawbacks in different aspects. Both architectures are used in many successful applications. The differences between the two architectures in application performance and resource effectiveness depend on various factors, including application scale and complexity, traffic load, resource availability, and deployment environment.
-
(2023)In recent years, classical neural networks have been widely used in various applications and have achieved remarkable success. However, with the advent of quantum computing, there is growing interest in quantum neural networks (QNNs) as a potential alternative to classical machine learning. In this thesis, we study the architectures of quantum and classical neural networks. We also investigate the performance of QNNs compared to classical neural networks from various aspects, such as vanishing gradients, trainability, and expressivity. Our experiments demonstrate that QNNs have the potential to outperform classical neural networks in specific scenarios. While more powerful QNNs exhibit improved performance compared to classical neural networks, our findings also indicate that less powerful QNNs may not always yield significant improvements. This suggests that the effectiveness of QNNs in surpassing classical approaches is contingent on factors such as network architecture, optimization techniques, and problem complexity.
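For illustration, a minimal variational circuit of the kind often used as a QNN building block – not one of the specific architectures studied in the thesis – sketched with PennyLane's default simulator.

```python
# Illustrative two-qubit variational circuit: angle encoding of the input,
# trainable rotations, one entangling gate, and a Pauli-Z expectation readout.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnn(params, x):
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)        # encode the classical input
    for i in range(n_qubits):
        qml.RY(params[i], wires=i)   # trainable layer
    qml.CNOT(wires=[0, 1])           # entanglement
    return qml.expval(qml.PauliZ(0))

params = np.array([0.1, 0.2], requires_grad=True)
print(qnn(params, np.array([0.5, -0.3])))
```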
-
(2018)Text classification, also known as text categorization, is the task of classifying documents into predefined sets. With the prosperity of social networks, a large volume of unstructured text is generated at an exponential rate. Social media text, due to its limited length, extreme class imbalance, high dimensionality, and multi-label characteristics, needs special processing before being fed to machine learning classifiers. There are all kinds of statistical, machine learning, and natural language processing approaches to solve the problem, of which two families of machine learning algorithms are the state of the art. One is large-scale linear classification, which deals with large sparse data, especially short social media text; the other is deep learning, which takes advantage of word order. This thesis provides an end-to-end solution for dealing with large-scale, multi-label and extremely imbalanced text data, compares both approaches, and discusses the effect of balanced learning. The results show that deep learning does not necessarily work well in this context. Well-designed large linear classifiers can achieve the best scores. Also, when the data is large enough, the simpler classifiers may perform better.
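A minimal sketch of the large-scale linear classification approach for sparse multi-label text, using toy data; the TF-IDF settings and the one-vs-rest linear SVM with balanced class weights are illustrative choices, not the thesis's exact configuration.

```python
# Illustrative multi-label text classification with sparse TF-IDF features and
# a one-vs-rest linear SVM; "balanced" class weights address class imbalance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

texts = ["short post about topic a", "another post on b and c", "more text on a"]
labels = [["a"], ["b", "c"], ["a"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)
X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(texts)   # sparse matrix

clf = OneVsRestClassifier(LinearSVC(class_weight="balanced"))
clf.fit(X, y)
print(mlb.inverse_transform(clf.predict(X)))
```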
-
(2015)The purpose of this thesis is to compare different classification methods on the basis of their results for accuracy, precision and recall. The methods used are Logistic Regression (LR), Support Vector Machines (SVM), Neural Networks (NN), Naive Bayes (NB) and a full Bayesian network (BN). Each section describes one of the methods, including its main idea, the intuition underpinning it, and its application to simple data sets. The data used in this thesis comprises three different sets, used first for learning the Logistic Regression and Support Vector Machine models and then applied to the Bayesian counterparts as well as to the Neural Network model. The results show that the Bayesian methods are well suited to the classification task: they are as good as their counterparts, and sometimes better. While Support Vector Machines and Neural Networks remain the best all around, the Bayesian approach can have comparable performance and makes a good approximation of the traditional methods' power. In order of performance, Logistic Regression scored lowest among the classification methods, followed by Naive Bayes and then Bayesian networks, while Support Vector Machines and Neural Networks performed best.
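A minimal sketch of such a comparison with scikit-learn on synthetic data; the full Bayesian network classifier is omitted because it is not available there, and the hyperparameters are illustrative.

```python
# Illustrative comparison of four classifiers on accuracy, precision and recall.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "NN": MLPClassifier(max_iter=1000),
    "NB": GaussianNB(),
}
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5,
                            scoring=["accuracy", "precision", "recall"])
    print(name,
          round(scores["test_accuracy"].mean(), 3),
          round(scores["test_precision"].mean(), 3),
          round(scores["test_recall"].mean(), 3))
```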
-
(2013)This study presents some of the available methods for haplotype reconstruction and evaluates the accuracy and efficiency of three different software programs that utilize these methods. The analysis is performed on the publicly available QTLMAS XII common dataset. The program LinkPHASE 5+, a rule-based software, considers pedigree information (deduction and linkage) only. HiddenPHASE is a likelihood-based software, which takes into account molecular information (linkage disequilibrium). The DualPHASE software combines both of the above-mentioned methods. We examine how the use of the different available sources of information, as well as the shape of the data, affects haplotype inference.
-
(2018)In software product line engineering (SPLE), parts of the developed software are made variable in order to be able to build a whole range of software products at the same time. This is widely known to have a number of potential benefits, such as saving costs when the product line is large enough. However, managing variability in software introduces challenges that are not well addressed by tools used in conventional software engineering, and specialized tools are needed. Research questions: 1) What are the most important requirements for SPLE tools for a small-to-medium sized organisation aiming to experiment with SPLE? 2) How well are those requirements met in two specific SPLE tools, Pure::Variants and the Clafer tools? 3) How do the studied tools compare against each other when it comes to their suitability for the chosen context (a digital board game platform)? 4) How can common requirements for SPLE tools be generalized to be applicable to both graphical and text-based tools? A list of requirements is first obtained from the literature and then used as a basis for an experiment where support for each requirement is tried out with both tools. Then part of an example product line is developed with both tools and the experiences are reported. Both tools were found to support the list of requirements quite well, although there were some usability problems and not everything could be tested due to technical issues. Based on developing the example, both tools were found to have their own strengths and weaknesses, probably partly resulting from one being GUI-based and the other textual. ACM Computing Classification System (CCS): (1) CCS → Software and its engineering → Software creation and management → Software development techniques → Reusability → Software product lines (2) CCS → Software and its engineering → Software notations and tools → Software configuration management and version control systems
-
(2022)Air ions can play an important role in the new particle formation (NPF) process and consequently influence atmospheric aerosols, which affect climate and air quality as potential cloud condensation nuclei. However, air ions and their role in NPF have not yet been comprehensively investigated, especially in polluted areas. To explore air ions in a polluted environment, we compared the air ions at SORPES, a suburban site in polluted eastern China, with those at SMEAR II, a well-studied boreal forest site in Finland, based on the air ion number size distribution (0.8-42 nm) measured with a Neutral Cluster and Air Ion Spectrometer (NAIS) from 7 June 2019 to 31 August 2020. Air ions were classified into three size ranges: cluster (0.8-2 nm), intermediate (2-7 nm), and large (7-20 nm) ions. The median concentration of cluster ions at SORPES (217 cm−3) was about 6 times lower than that at SMEAR II (1268 cm−3) due to the high condensation sink (CS) and pre-existing particle loading in the polluted area, whereas the median large ion concentration at SORPES (197 cm−3) was about 3 times higher than that at SMEAR II (67 cm−3). Seasonal variations of ion concentration differed with ion size and polarity at the two sites. A high concentration of cluster ions was observed in the evening in spring and autumn at SMEAR II, while at SORPES the cluster ion concentration remained at a high level throughout the day in the same seasons. NPF events occurred more frequently at the SORPES site (SMEAR II: 16%; SORPES: 39%), and the NPF frequency at both sites was highest in spring (SMEAR II: 43%; SORPES: 56%). Around noon on NPF event days, the concentration of intermediate ions was 8-14 times higher than at the same hours on non-event days, indicating that intermediate ions can be used as an indicator for NPF at SMEAR II and SORPES. The median formation rate of 1.5 nm ions at SMEAR II was higher than that at SORPES, while a higher formation rate of 3 nm ions was observed at SORPES. At 3 nm, the formation rate of charged particles was only 11% and 1.6% of the total rate at SMEAR II and SORPES, respectively, which supports the current view that neutral pathways dominate the new particle formation process in the continental boundary layer. However, the higher ratio between the charged and total formation rates of 3 nm particles at SMEAR II indicates that ion-induced nucleation can make a bigger contribution to NPF in clean areas compared to polluted areas. Higher median growth rates (GR) of 3-7 nm (SMEAR II: 3.1 nm h−1; SORPES: 3.7 nm h−1) and 7-20 nm (SMEAR II: 5.5 nm h−1; SORPES: 6.9 nm h−1) ions were found at SORPES in comparison to SMEAR II, suggesting a higher availability of condensing vapors at SORPES. This study presents a comprehensive comparison of air ions in completely different environments and highlights the need for long-term ion measurements to improve the understanding of air ions and their role in NPF in polluted areas like eastern China.
-
(Helsingin yliopisto / Helsingfors universitet / University of Helsinki, 2008)The molecular-level structure of mixtures of water and alcohols is very complicated and has been under intense research in the recent past. Both experimental and computational methods have been used in the studies. One method for studying the intra- and intermolecular bindings in the mixtures is the use of so-called difference Compton profiles, which are a way to obtain information about changes in the electron wave functions. In the process of Compton scattering a photon scatters inelastically from an electron. The Compton profile that is obtained from the electron wave functions is directly proportional to the probability of photon scattering at a given energy into a given solid angle. In this work we develop a method to compute Compton profiles numerically for mixtures of liquids. In order to obtain the electronic wave functions necessary to calculate the Compton profiles we need some statistical information about atomic coordinates. Acquiring this using ab initio molecular dynamics is beyond our computational capabilities, and therefore we use classical molecular dynamics to model the movement of atoms in the mixture. We discuss the validity of the chosen method in view of the results obtained from the simulations. There are some difficulties in using classical molecular dynamics for the quantum mechanical calculations, but these can possibly be overcome by parameter tuning. According to the calculations, clear differences can be seen in the Compton profiles of different mixtures. This prediction needs to be tested in experiments in order to find out whether the approximations made are valid.
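For reference, the standard impulse-approximation relation behind the text above (a generic textbook building block, not the thesis's specific derivation): the Compton profile is the projection of the electron momentum density, obtained from the occupied wave functions, onto the scattering direction, and difference profiles isolate the bonding-induced changes.

```latex
J(p_z) = \iint \rho(\mathbf{p})\,\mathrm{d}p_x\,\mathrm{d}p_y ,
\qquad
\rho(\mathbf{p}) = \sum_{i\,\mathrm{occ}}
    \Big| \int \psi_i(\mathbf{r})\, e^{-i\mathbf{p}\cdot\mathbf{r}}\,\mathrm{d}^3 r \Big|^2 ,
\qquad
\Delta J(p_z) = J_{\mathrm{mixture}}(p_z) - J_{\mathrm{reference}}(p_z)
```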
-
(2020)Due to its exceptional thermal properties and irradiation resistance, tungsten is the material of choice for critical plasma-facing components in many leading thermonuclear fusion projects. Owing to the natural retention of hydrogen isotopes in materials such as tungsten, the safety of a fusion device depends heavily on the inventory of radioactive tritium in its plasma-facing components. The proposed methods of tritium removal typically include thermal treatment of massive metal structures for prolonged timescales. A novel way to either shorten the treatment times or lower the required temperatures is based on performing the removal under an H₂ atmosphere, effectively exchanging the trapped tritium for non-radioactive protium. In this thesis, we employ molecular dynamics simulations to study the mechanism of hydrogen isotope exchange in vacancy, dislocation and grain boundary type defects in tungsten. By comparing the results to simulations of purely diffusion-based tritium removal methods, we establish that hydrogen isotope exchange indeed facilitates faster removal of tritium for all studied defect types at temperatures of 500 K and above. The fastest removal, when normalised by the initial occupation of the defect, is shown to occur in vacancies and the slowest in grain boundaries. Through an atom-level study of the mechanism, we are able to verify that tritium removal using isotope exchange depends on keeping the defect saturated with hydrogen. This study also shows that molecular dynamics is indeed a valid tool for studying tritium removal and isotope exchange in general. Using small system sizes and spatially parallelised simulation tools, we have managed to model isotope exchange for timescales extending from hundreds of nanoseconds up to several microseconds.
-
(2022)In this thesis, the sputtering of several low- and high-index tungsten surface crystal directions is investigated. The molecular dynamics study is conducted using the primary knock-on atom method, which allows for an equal energy deposition for all surface orientations. The energy is introduced into the system at two different depths: on the surface and at a depth of 1 nm. In addition to the sputtering yield of each surface orientation, the underlying sputtering process is investigated. Amorphous target materials are often used to compare sputtering yields of polycrystalline materials with simulations. Therefore, an amorphous surface is also investigated to compare its sputtering yield and sputtering process with the crystalline surface orientations. When the primary knock-on atom was placed on the surface, all surface orientations had a cosine-shaped angular distribution, with little variation in the sputtering yield for most of the surface orientations. Linear collision sequences were observed to have a large impact on the sputtering yield when the energy was introduced deeper inside the material. In these linear collision sequences the recoils travel along the most close-packed atom rows in the material. The distance from the origin of the collision cascade to the surface in the direction of the most close-packed row is therefore crucial for the sputtering yield of the surface. Surface directions with high angles between this direction and the surface normal hence show a reduction in the sputtering yield. The amorphous material had a slightly lower sputtering yield than the crystalline materials when the primary knock-on atom was placed on the surface, whereas the difference rose to several orders of magnitude when the energy was introduced at 1 nm. It is impossible for linear collision sequences to propagate long distances in the amorphous material, and therefore the angular distribution in both cases is cosine-shaped. Because the amorphous material has no long-range order, it was unable to reproduce the linear collision sequences that are characteristic of the crystalline materials, which explains the difference of up to several orders of magnitude in the sputtering yield when the energy was introduced at a depth of 1 nm.
-
(2018)We introduce a new model for contingent convertibles. The write-down, or equity conversion, and the default of the contingent convertible are modeled as states of a conditional Markov process. Valuation formulae for different financial contracts, such as CDSs and different types of contingent convertibles, are derived. The model can be thought of as an extension of reduced-form models with an additional state. For practical applications, this model could be used for new types of contingent convertible derivatives in a similar fashion to how reduced-form models are used for credit derivatives.
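As a point of reference for the reduced-form setting that the model extends (this is the standard zero-recovery building block, not the thesis's valuation formula), a defaultable zero-coupon bond with short rate r and default intensity λ is priced as below; the conditional Markov model described above adds a further intensity-driven state for the write-down or conversion.

```latex
B(0,T) = \mathbb{E}\!\left[ \exp\!\left( -\int_0^T \big( r_s + \lambda_s \big)\,\mathrm{d}s \right) \right]
```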
-
(Helsingin yliopisto / University of Helsinki / Helsingfors universitet, 2006)The aim of this work was to assess the structure and use of the conceptual model of occlusion in operational weather forecasting. First, a survey was made of the conceptual model of occlusion as introduced to operational forecasters at the Finnish Meteorological Institute (FMI). In the same context, an overview was made of the use of the conceptual model in modern operational weather forecasting, especially in connection with the widespread use of numerical forecasts. In order to evaluate the features of occlusions in operational weather forecasting, all the occlusion processes occurring during the year 2003 over Europe and the Northern Atlantic area were investigated using the conceptual model of occlusion and the methods suggested at the FMI. The investigation yielded a classification of the occluded cyclones on the basis of the extent to which the conceptual model fitted the description of the observed thermal structure. The seasonal and geographical distribution of the classes was inspected. Some relevant cases belonging to different classes were collected and analyzed in detail; in this deeper investigation, tools and techniques which are not routinely used in operational weather forecasting were adopted. Both the statistical investigation of the occluded cyclones during 2003 and the case studies revealed that the traditional classification of the types of occlusion on the basis of the thermal structure does not take into account the greater variety of occlusion structures that can be observed. Moreover, the conceptual model of occlusion has turned out to be often inadequate for describing well-developed cyclones. A deep and constructive revision of the conceptual model of occlusion is therefore suggested in light of the results obtained in this work. The revision should take into account both the progress being made in building a theoretical footing for the occlusion process and the recent tools and meteorological quantities which are nowadays available.
-
(2022)In recent years, the concept of the Metaverse has become a popular buzzword in the media and in different communities. In 2021, the company behind Facebook rebranded itself as Meta Platforms, Inc. in order to match its new vision of developing the Metaverse. The Metaverse is becoming reality as intersecting technologies, including head-mounted virtual reality displays (HMDs) and non-fungible tokens (NFTs), have been developed. Different communities, such as the media, researchers, consumers and companies, have different perspectives on the Metaverse and its opportunities and problems. Metaverse technology has been researched thoroughly, while little to no research has been done on gray literature, i.e. non-scientific sources, to gain insight into the ongoing hype. The conducted research analyzed 44 sources in total, ranging from news articles to videos and forum discussions. The results show that people see opportunities in Metaverse entrepreneurship in the changing career landscape. However, the visions of Meta Platforms, Inc. also receive a fair amount of critique in the analyzed articles and threads. The results suggest that most consumers are only interested in a smaller subset of features than what is being marketed. The conducted research gives insight into how different sources view the Metaverse and can therefore be used as a starting point for more comprehensive gray literature studies on the Metaverse. While making innovations to the underlying technology is important, studying people's viewpoints is a requirement for academia to understand the phenomenon and for the industry to produce a compelling product.