Lecture 1 – Parallel and distributed computing
What is big data? There is no fixed definition, and the concept changes over time. Data becomes big
data when it becomes too large or too complex to be analysed with traditional data analysis software.
This can be noticed when analysis becomes too slow or too unreliable, when systems become
unresponsive, or when day-to-day business operations are impacted.
Changes over time:
- In the past, storage was expensive. Now, storage is relatively cheap and easy
- In the past, only the most crucial data was preserved. Now, companies and governments
preserve huge amounts of data and more data is generated (i.e. customer information,
historical purchases, GPS trajectories, etc.)
- In the past, companies only consulted historical data, and did not analyse it. Now, more
companies and governments rely on data analysis such as event prediction and fraud detection.
Data analysis is computationally intensive and expensive. Examples:
- Online recommender systems: require instant results
- Frequent pattern mining: time complexity exponential in the number of different items,
independent of the number of transactions (e.g., market basket analysis)
- Multi-label classification: exponential number of possible combinations of labels to be
assigned to a new sample (e.g., Wikipedia tagging)
- Subspace clustering: exponential number of possible sets of dimensions in which clusters
could be found (e.g., customer segmentation)
The three aspects of big data:
- Volume – the actual quantity of data that is gathered
o Number of events logged, number of transactions, number of attributes, descriptions
- Variety – the different types of data that are gathered
o Some attributes may be numeric, others textual; data may be structured or unstructured;
timing may be irregular (sensor data arrive at regular time intervals, the accompanying log data are irregular)
- Velocity – the speed at which new data is coming in and the speed at which data must be
handled
o May result in irrecoverable bottlenecks
Solutions for big data: invest in hardware and use intelligent algorithms.
The goal of parallel computing: leveraging the full potential of your multicore, multiprocessor, or
multicomputer system. The goal of parallel processing is to reduce computation time.
Embarrassingly parallel – when an algorithm is split into smaller parts that can run in parallel. Each
chunk runs simultaneously, speeding up the process (linear speedup). When parallel processing is
done, the chunks are re-joined. Data processing is usually embarrassingly parallel, assuming there is
no communication necessary between the workers. Example:
- Define a function that takes as input the html source file of a web page and returns the main
article’s text
o Instead of applying the function sequentially to each web page, you can define several
workers that each apply the function to a large number of web pages simultaneously (see the sketch below)
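A minimal sketch of this pattern in Python (not from the lecture); the extract_article function and the list of already-downloaded page sources are hypothetical placeholders:

```python
from multiprocessing import Pool

def extract_article(html: str) -> str:
    """Hypothetical placeholder: return the main article text from an HTML source."""
    # A real implementation would use an HTML parser; here we just simulate some work.
    return html.strip()

if __name__ == "__main__":
    # Pretend these are the downloaded HTML sources of many web pages.
    pages = ["<html>page %d</html>" % i for i in range(10_000)]

    # Each worker process applies the same function to its own chunk of pages:
    # embarrassingly parallel, since the workers never need to communicate.
    with Pool(processes=4) as pool:
        articles = pool.map(extract_article, pages, chunksize=500)

    print(len(articles))
```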
Linear speedup – executing two tasks in parallel on two cores should halve the running time.
Speedup is the ratio between serial and parallel execution time: speedup = T_serial / T_parallel.
Task parallelism – multiple tasks are applied on the same data in parallel.
Data parallelism – a calculation is performed in parallel on many different data chunks.
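A rough illustration of the difference in Python (hypothetical functions and data, not from the lecture):

```python
from concurrent.futures import ProcessPoolExecutor
import statistics

data = list(range(1_000_000))

def mean(xs):       # task 1
    return statistics.fmean(xs)

def spread(xs):     # task 2
    return max(xs) - min(xs)

def chunk_sum(xs):  # same calculation, applied to each chunk
    return sum(xs)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as ex:
        # Task parallelism: different computations run on the *same* data in parallel.
        mean_future = ex.submit(mean, data)
        spread_future = ex.submit(spread, data)

        # Data parallelism: the *same* computation runs on different chunks of the data.
        chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]
        total = sum(ex.map(chunk_sum, chunks))

    print(mean_future.result(), spread_future.result(), total)
```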
Analysis of parallel algorithms
- Parallelization can speed up the execution time of an algorithm, but it does not change its
complexity
- When analysing parallel algorithms one is mostly interested in the speedup that is gained
- Typically one has to take the overhead of the parallelization into account, but the optimal
speedup one can get is linear, i.e. if work is divided over 4 processors, then execution is at
most 4 times faster.
- A lot of operations involving large amounts of data are embarrassingly parallel. However, in
general there are data or procedural dependencies preventing a linear speedup
A linear speedup is often not realized, because part of the algorithm typically cannot be parallelized (see the formula below).
Assume:
- T is the total serial execution time
- P is the proportion of the code that can be parallelized
- S is the number of parallel processes
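With these definitions, the standard Amdahl's law formula (the formula itself is not in the notes) gives the achievable speedup:
- parallel execution time: T * (1 - P) + T * P / S
- speedup = T / (T * (1 - P) + T * P / S) = 1 / ((1 - P) + P / S)
Even with unlimited parallel processes (S → ∞), the speedup is bounded by 1 / (1 - P); e.g. if only 80% of the code can be parallelized, the speedup can never exceed 5.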
Data dependencies prevent you from realizing a linear speedup: the input of one segment of code
depends on the output of another piece of code.
Branches also prevent a linear speedup: whether a segment of code is executed depends on a logical
condition (see the sketch below).
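A minimal Python sketch of such a data dependency (hypothetical example): each iteration needs the previous iteration's result, so the loop cannot simply be split over independent workers.

```python
def running_balance(transactions):
    """Cumulative sum: balances[i] depends on balances[i - 1]."""
    balances = []
    total = 0.0
    for amount in transactions:
        total += amount          # depends on the result of the previous iteration
        balances.append(total)
    return balances

# Independent work (e.g. transforming each transaction on its own) could be split
# over workers, but the running total cannot be chunked naively: each chunk would
# need the final balance of the previous chunk before it could start.
print(running_balance([100.0, -20.0, 35.5, -50.0]))
```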