
Big Data Optimization: Recent Developments and Challenges


DOCUMENT INFORMATION

Basic information

Format
Pages: 471
Size: 14.93 MB

Contents

Studies in Big Data, Volume 18

Series Editor: Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland

About this Series

The series “Studies in Big Data” (SBD) publishes new developments and advances in the various areas of Big Data, quickly and with high quality. The intent is to cover the theory, research, development, and applications of Big Data, as embedded in the fields of engineering, computer science, physics, economics and life sciences. The books of the series refer to the analysis and understanding of large, complex, and/or distributed data sets generated from recent digital sources coming from sensors or other physical instruments as well as simulations, crowdsourcing, social networks or other internet transactions, such as emails or video click streams, and others. The series contains monographs, lecture notes and edited volumes in Big Data spanning the areas of computational intelligence, including neural networks, evolutionary computation, soft computing and fuzzy systems, as well as artificial intelligence, data mining, modern statistics, operations research and self-organizing systems. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution, which enable both wide and rapid dissemination of research output.

More information about this series at http://www.springer.com/series/11970

Editor: Ali Emrouznejad

Big Data Optimization: Recent Developments and Challenges

1st ed. 2016

Editor: Ali Emrouznejad, Aston Business School, Aston University, Birmingham, UK

ISSN 2197-6503; e-ISSN 2197-6511
ISBN 978-3-319-30263-8; e-ISBN 978-3-319-30265-2
DOI 10.1007/978-3-319-30265-2
Library of Congress Control Number: 2016933480

© Springer International Publishing Switzerland 2016

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Printed on acid-free paper. This Springer imprint is published by Springer Nature. The registered company is Springer International Publishing AG Switzerland.

Preface

The increased capacity of contemporary computers allows the gathering, storage and analysis of large amounts of data which only a few years ago would have been impossible. These new data are providing large quantities of information, and enabling its interconnection using new computing methods and databases. There are many issues arising from the emergence of big data, from computational capacity to data manipulation techniques, all of which present challenging opportunities. Researchers and industries working in various different fields are dedicating efforts to resolve these issues. At the same time, scholars are excited by the scientific possibilities offered by big data, and especially by the opportunities to investigate major societal problems related to health, privacy, economics, business dynamics and many more. These large amounts of data present various challenges, one of the most intriguing of which deals with knowledge discovery and
large-scale data mining. Although these vast amounts of digital data are extremely informative, and their enormous possibilities have been highlighted on several occasions, issues related to optimization remain to be addressed. For example, the formulation of optimization problems of unprecedented sizes (millions or billions of variables) is inevitable.

The main objective of this book is to provide the necessary background to work with big data by introducing some novel optimization algorithms and codes capable of working in the big data setting, as well as introducing some applications in big data optimization for interested academics and practitioners, and to benefit society, industry, academia and government.

To facilitate this goal, chapter “Big Data: Who, What and Where? Social, Cognitive and Journals Map of Big Data Publications with Focus on Optimization” provides a literature review and summary of the current research in big data and large-scale optimization. In this chapter, Emrouznejad and Marra investigate the research areas that are most influenced by big data availability, and on which aspects of large data handling different scientific communities are working. They employ scientometric mapping techniques to identify who works on what in the area of big data and large-scale optimization problems. This chapter highlights the major effort involved in handling big data optimization and large-scale data mining, which has led to several algorithms that have proven to be more efficient, faster and more accurate than earlier solutions.

This is followed by a comprehensive discussion on setting up a big data project in chapter “Setting Up a Big Data Project: Challenges, Opportunities, Technologies and Optimization”, as discussed by Zicari, Rosselli, Ivanov, Korfiatis, Tolle, Niemann and Reichenbach. The chapter explains the general value of big data analytics for the enterprise and how value can be derived by analysing big data. It then introduces the characteristics of big data projects and how such projects can be set up, optimized and managed. To enable the choice of optimal big data tools for given requirements, the relevant technologies for handling big data, such as NoSQL and NewSQL systems, in-memory databases, analytical platforms and Hadoop-based solutions, are also outlined in this chapter.

In chapter “Optimizing Intelligent Reduction Techniques for Big Data”, Pop, Negru, Ciolofan, Mocanu and Cristea analyse existing techniques for data reduction at scale to facilitate big data processing optimization. The chapter covers various areas in big data, including data manipulation, analytics and big data reduction techniques, considering descriptive, predictive and prescriptive analytics. A Cyber-Water case study is also presented, referring to the optimization process and the monitoring, analysis and control of natural resources, especially water resources, in order to preserve water quality.

Li, Guo and Chen, in the chapter “Performance Tools for Big Data Optimization”, focus on performance tools for big data optimization. The chapter explains that many big data optimizations have critical performance requirements (e.g., real-time big data analytics), as indicated by the velocity dimension of the 4Vs of big data. To accelerate big data optimization, users typically rely on detailed performance analysis to identify potential performance bottlenecks. To alleviate the challenges of performance analysis, various performance tools have been proposed to understand the runtime behaviours of big data optimization for performance tuning.

Further to this, Valkonen, in chapter “Optimising Big Images”, presents a very good application of big data optimization, used for analysing big images. Real-life photographs and other images, such as those from medical imaging modalities, consist of tens of millions of data points. Mathematically based models for their improvement—due to noise, camera shake, physical and technical limitations,
etc.—are moreover often highly non-smooth and, increasingly often, non-convex. This creates significant optimization challenges for the application of the models in quasi-real-time software packages, as opposed to more ad hoc approaches whose reliability is not as easily proven as that of mathematically based variational models. After introducing a general framework for mathematical image processing, this chapter presents the current state of the art in optimization methods for solving such problems, and discusses future possibilities and challenges.

As another novel application, Rajabi and Beheshti, in chapter “Interlinking Big Data to Web of Data”, explain interlinking big data to the web of data. The big data problem can be seen as a massive number of data islands, ranging from personal, shared and social to business data. The data in these islands are becoming large-scale, never-ending and ever-changing, arriving in batches at irregular time intervals. In this context, it is important to investigate how the linked data approach can enable big data optimization. In particular, the linked data approach has recently facilitated accessibility, sharing and enrichment of data on the web. This chapter discusses the advantages of applying the linked data approach toward the optimization of big data in the linked open data (LOD) cloud by: (i) describing the impact of linking big data to the LOD cloud; (ii) presenting various interlinking tools for linking big data; and (iii) providing a practical case study: linking a big data repository to DBpedia.

The topology of big data is the subject of chapter “Topology, Big Data and Optimization”, as discussed by Vejdemo-Johansson and Skraba. The idea of using geometry in learning and inference has a long history, going back to canonical ideas such as Fisher information, discriminant analysis and principal component analysis. The related area of topological data analysis (TDA), which has been developing over the past decade, aims to extract robust topological features from data and to use these summaries for modelling the data. A topological summary generates a coordinate-free, deformation-invariant and highly compressed description of the geometry of an arbitrary data set. This chapter explains how topological techniques are well suited to extending our understanding of big data.

In chapter “Applications of Big Data Analytics Tools for Data Management”, Jamshidi, Tannahill, Ezell, Yetis and Kaplan present some applications of big data analytics tools for data management. Our interconnected world of today and the advent of cyber-physical systems, or systems of systems (SoS), are a key source of data accumulation, be it numerical, image, text or texture, etc. An SoS is basically defined as an integration of independently operating, non-homogeneous systems for a certain duration to achieve a higher goal than the sum of the parts. Recent efforts have developed a promising approach, called “data analytics”, which uses statistical and computational intelligence (CI) tools such as principal component analysis (PCA), clustering, fuzzy logic, neuro-computing, evolutionary computation, Bayesian networks, data mining, pattern recognition, etc., to reduce “big data” to a manageable size. This chapter illustrates several case studies and attempts to construct a bridge between SoS and data analytics to develop reliable models for such systems.

Optimizing access policies for big data repositories is the subject discussed by Contreras in chapter “Optimizing Access Policies for Big Data Repositories: Latency Variables and the Genome Commons”. The design of access policies for large aggregations of scientific data has become increasingly important in today’s data-rich research environment. Planners routinely consider and weigh different policy variables when deciding how and when to release data to the public. This chapter proposes a methodology in which the timing of data release can be used to balance policy variables and thereby optimize data
release policies. The global aggregation of publicly available genomic data, the “genome commons”, is used as an illustration of this methodology.

Achieving the full transformative potential of big data in this increasingly digital and interconnected world requires both new data analysis algorithms and a new class of systems to handle the dramatic data growth, the demand to integrate structured and unstructured data analytics, and the increasing computing needs of massive-scale analytics. Li, in chapter “Big Data Optimization via Next Generation Data Center Architecture”, elaborates on big data optimization via next-generation data centre architecture. This chapter discusses the hardware and software features of the High Throughput Computing Data Centre architecture (HTC-DC) for big data optimization, with a case study at Huawei.

In the same area, big data optimization techniques can enable designers and engineers to realize large-scale monitoring systems in real life, by allowing these systems to comply with real-world constraints in the areas of performance and reliability. In chapter “Big Data Optimization Within Real World Monitoring Constraints”, Helmholt and van der Waaij give details of big data optimization using several examples of real-world monitoring systems.

Handling big data poses a huge challenge for the computer science community. Some of the most appealing research domains, such as machine learning, computational biology and social networks, are now overwhelmed with large-scale databases that require computationally demanding manipulation. Smart sampling and optimal dimensionality reduction of big data using compressed sensing is the main subject of chapter “Smart Sampling and Optimal Dimensionality Reduction of Big Data Using Compressed Sensing”, as elaborated by Maronidis, Chatzilari, Nikolopoulos and Kompatsiaris. This chapter proposes several techniques for optimizing big data processing, including computationally efficient implementations such as parallel and distributed architectures. Although compressed sensing (CS) is renowned for its capability of providing succinct representations of data, this chapter investigates its potential as a dimensionality reduction technique in the domain of image annotation.

Another novel application of big data optimization, in brain disorder rehabilitation, is presented by Brezany, Štěpánková, Janatová, Uller and Lenart in chapter “Optimized Management of BIG Data Produced in Brain Disorder Rehabilitation”. This chapter introduces the concept of a scientific dataspace that involves and stores numerous and often complex types of data, e.g. primary data captured from the application, data derived by curation and analytics processes, background data including ontology and workflow specifications, semantic relationships between dataspace items based on ontologies, and available published data. The main contribution of this chapter is the application of big data and cloud technologies to ensure efficient exploitation of this dataspace, namely novel software architectures, algorithms and a methodology for its optimized management and utilization.

This is followed by another application of big data optimization, in maritime logistics, presented by Berit Dangaard Brouer, Christian Vad Karsten and David Pisinger in chapter “Big Data Optimization in Maritime Logistics”. Large-scale maritime problems are found particularly within liner shipping, due to the vast size of the networks that global carriers operate. This chapter introduces a selection of large-scale planning problems within the liner shipping industry. It also shows how large-scale optimization methods can utilize special problem structures, such as separable/independent sub-problems, and gives examples of advanced heuristics using divide-and-conquer paradigms, decomposition and mathematical programming within a large-scale search framework.

On more complex uses of big data optimization, chapter “Big Network Analytics Based on Nonconvex Optimization”
focuses on the use of network analytics, which can contribute to networked big data processing. Many network issues can be modelled as non-convex optimization problems and can consequently be addressed by optimization techniques. Gong, Cai, Ma and Jiao, in this chapter, discuss big network analytics based on non-convex optimization. Within the pipeline of non-convex optimization techniques, evolutionary computation offers an outlet for handling these problems efficiently. Since network community discovery is a critical research agenda of network analytics, this chapter focuses on evolutionary computation-based non-convex optimization for network community discovery. Several experimental studies are shown to demonstrate the effectiveness of the optimization-based approach for big network community analytics.

Large-scale and big data optimization based on Hadoop is the subject of chapter “Large-Scale and Big Optimization Based on Hadoop”, presented by Cao and Sun. As explained in this chapter, integer linear programming (ILP) is among the most popular optimization techniques found in practical applications; however, it often faces computational issues in modelling real-world problems. Computation can easily outgrow the power of standalone computers as the size of the problem increases. Modern distributed computing relaxes this computing power constraint by providing scalable computing resources to match application needs, which boosts large-scale optimization. This chapter presents a paradigm that leverages Hadoop, an open-source distributed computing framework, to solve a large-scale ILP problem abstracted from real-world air traffic flow management. The ILP involves millions of decision variables, which is intractable even with existing state-of-the-art optimization software packages.

Further theoretical developments and computational approaches in large-scale unconstrained optimization are presented by Babaie-Kafaki in chapter “Computational Approaches in Large-Scale Unconstrained Optimization”. As a topic of great significance in nonlinear analysis and mathematical programming, unconstrained optimization is widely and increasingly used in engineering, economics, management, industry and other areas. In many big data applications, solving an unconstrained optimization problem with thousands or millions of variables is indispensable. In such situations, methods with the important feature of low memory requirements are helpful tools. This chapter explores two families of methods for solving large-scale unconstrained optimization problems, conjugate gradient methods and limited-memory quasi-Newton methods, both of which are structured around line searches.

This is followed by an account of numerical methods for large-scale non-smooth optimization (NSO), as discussed by Karmitsa in chapter “Numerical Methods for Large-Scale Nonsmooth Optimization”. NSO refers to the general problem of minimizing (or maximizing) functions that are typically not differentiable at their minimizers (maximizers). NSO problems are in general difficult to solve, even when the size of the problem is small and the problem is convex. This chapter recalls two numerical methods, the limited memory bundle algorithm (LMBM) and the diagonal bundle method (DBUNDLE), for solving large-scale non-convex NSO problems.

Chapter “Metaheuristics for Continuous Optimization of High-Dimensional Problems: State of the Art and Perspectives” presents a state-of-the-art discussion of metaheuristics for continuous optimization of high-dimensional problems. In this chapter, Trunfio shows that the age of big data brings new opportunities in many relevant fields, as well as new research challenges. Among the latter is the need for more effective and efficient optimization techniques, able to address problems with hundreds, thousands and even millions of continuous variables. In order to provide a picture of the state of the art in the field of high-dimensional
continuous optimization, this chapter describes the most successful algorithms presented in the recent literature, also outlining relevant trends and identifying possible future research directions.

Finally, Sagratella discusses convergent parallel algorithms for big data optimization problems in chapter “Convergent Parallel Algorithms for Big Data Optimization Problems”. When dealing with big data problems, it is crucial to design methods able to decompose the original problem into smaller, more manageable pieces. Parallel methods lead to a solution by concurrently working on different pieces that are distributed among the available agents, so as to exploit the computational power of multi-core processors and thereby solve the problem efficiently. Beyond gradient-type methods, which can of course be easily parallelized but suffer from practical drawbacks, a convergent decomposition framework for the parallel optimization of (possibly non-convex) big data problems was recently proposed. This framework is very flexible and includes both fully parallel and fully sequential schemes, as well as virtually all possibilities in between. This chapter illustrates the versatility of this parallel decomposition framework by specializing it to different well-studied big data optimization problems, such as LASSO, logistic regression and the training of support vector machines.

Ali Emrouznejad
January 2016

Acknowledgments

First among these are the contributing authors—without them, it would not have been possible to put together such a valuable book, and I am deeply grateful to them for bearing with my repeated requests for materials and revisions while providing high-quality contributions. I am also grateful to the many reviewers for their critical review of the chapters and the insightful comments and suggestions provided. Thanks are also due to Professor Janusz Kacprzyk, the Editor of this Series, for supporting and encouraging me to complete this project. The editor would like to thank Dr. Thomas Ditzinger (Springer Senior Editor, Interdisciplinary and Applied Sciences & Engineering), Ms. Daniela Brandt (Springer Project Coordinator, Production Heidelberg), Ms. Gajalakshmi Sundaram (Springer Production Editor, Project Manager) and Mr. Yadhu Vamsi (Production team, Scientific Publishing Services Pvt. Ltd., Chennai, India) for their excellent editorial and production assistance in producing this volume. I hope the readers will share my excitement with this important scientific contribution to the body of knowledge in Big Data.

Ali Emrouznejad

Index (subject entries G–T; truncated in this preview)
objective optimization model, for community structure analytics modularity based model multi-resolution model Singular Value Decomposition (SVD) Sliding window embedding Small-world network Smart sampling data compressibility dimensionality reduction optimal number of reduced dimensions Smooth problems with separable feasible sets Snapshot Model Social Network Analysis (SNA) Social Return of Investment (SROI) Social space, mapping Software AG Software as a Service (SaaS) Solar energy forecasting SONATA (performance tool) design considerations implementation details data collection data loading optimization recommendation performance visualization overall architecture of target users user cases of performance analysis runtime efficiency, improving system problems, diagnosing tuning runtime configurations Sones (graph database) Space-Time composite SPARQL (query language) Sparse reconstruction dictionary design signal reconstruction sparsity and compressibility Sparsity Spatial databases, data manipulation challenges Spatial pyramids Spatio-temporal object-oriented data model Spectral conjugate gradient methods Split inexact Uzawa method Sprint (deliver service) SSB benchmark Standard Feed-Forward Neural Network Starfish system architecture of Statistical Workload Injector for MapReduce (SWIM) See WL suite, benchmark Statistics S3 (bid data application) Stochastic coordinate descent methods Stochastic programming Stochastic variation Stratified manifold learning Stratified space Stratum Support Vector Machines (SVM) Support vector regression model (SVR) SW algorithm Synthetic aperture radar (SAR) System architecture oriented optimization System of Systems (SoS) T Tabular stores Takens’ embedding See Sliding window embedding Temporal databases, data manipulation challenges Teradata Aster (analytical platform) Theia (visualization tool) Thematic Analysis Three-term conjugate gradient methods Tibco (commercial platform) Tikhonov regularization approach Time Delay 
Network T-Mobile (deliver service) Top-Down Specialization (TDS) Topological inference Topological skeleta Topology applications dynamics global local nonlinear dimensionality reduction visualization Mapper optimization software and limitations Total generalised variation (TGV) TPC-C/H/W/DS benchmark Tractography Transaction Processing Performance Council (TPC) Transshipment Traveling Salesman Problem (TSP) Trunk services Turbo design considerations implementation details JVM level runtime level system level industrial use cases overall architecture of target users Twister 2S-Ensemble U Unbalanced subcomponents Use cases of big data project data capture and processing model V Vector of Locally Aggregated Descriptors (VLAD) Verizon (deliver service) Vessel schedule recovery problem (VSRP) definitions mathematical model Voldemort (key-value store) VoltDB (in-memory database) W Wasserstein distances Wavelength Division Multiplexing (WDM) networks Web of Data–big data interlinking case study future directions of process tools Wide column stores Wind energy forecasting WL suite, benchmark Wolfe conditions X XML big data XML format based format Y YARN (Hadoop component) YCSB benchmark Z Zookeeper, configuration management ... (ed.), Big Data Optimization: Recent Developments and Challenges, Studies in Big Data 18, DOI 10.1007/978-3-31930265-2_2 Setting Up a Big Data Project: Challenges, Opportunities, Technologies and Optimization. .. Publishing Switzerland 2016 Ali Emrouznejad (ed.), Big Data Optimization: Recent Developments and Challenges, Studies in Big Data 18, DOI 10.1007/978-3-31930265-2_1 Big Data: Who, What and Where? Social,... Interlinking Big Data to Web of Data Enayat Rajabi and Seyed-Mehdi-Reza Beheshti Topology, Big Data and Optimization Mikael Vejdemo-Johansson and Primoz Skraba Applications of Big Data Analytics

Date posted: 02/03/2019, 11:44
