COMP2310 - Exam Cheat Sheet


Concurrency

If there is no external observer who can identify two events as being in strict temporal sequence, then these two events are considered concurrent.

The concurrent programming abstraction is the study of interleaved execution sequences of the atomic instructions of sequential processes.
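This abstraction can be sketched directly: model each sequential process as a generator that yields once per atomic instruction, and let a round-robin scheduler interleave them. This is a hypothetical illustration of interleaving, not how a real OS scheduler works:

```python
from collections import deque

def process(name, steps):
    """A sequential process: yields once per atomic instruction."""
    for i in range(steps):
        yield f"{name}{i}"

def interleave(*procs):
    """Round-robin scheduler: one atomic step of each live process in turn."""
    queue = deque(procs)
    trace = []
    while queue:
        p = queue.popleft()
        try:
            trace.append(next(p))
            queue.append(p)      # process not finished; reschedule it
        except StopIteration:
            pass                 # process terminated
    return trace

trace = interleave(process("A", 3), process("B", 2))
print(trace)  # ['A0', 'B0', 'A1', 'B1', 'A2']
```

A different scheduling policy would produce a different but equally valid interleaving, which is exactly why correctness arguments must hold for all possible interleavings.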

Sources of Concurrency - Slide 6

  • Overlapped I/O and computation
    • Overlap I/O with computation so CPU cycles aren't wasted
  • Multi-programming
    • Multiple jobs sit in a "job pool". A job executes until it is interrupted by an external factor or issues an I/O request; as soon as it blocks on I/O, the OS suspends it and starts another job. This keeps the CPU busy, avoiding wasted cycles, as long as there are processes ready to execute.
    • Without multi-programming, as soon as a job goes to I/O, the CPU sits idle until the job returns.
  • Multi-tasking
    • Tasks can interrupt already-started ones before they finish, resulting in many interleaved tasks. Multi-tasking does not require parallel execution of multiple tasks at exactly the same time.
  • Multi-processor systems
    • Multi-processor systems implement multiprocessing.
    • Multiple separate processors allow multiple programs to run, one on each processor, independently of the others. Note that access to shared resources must still be controlled.
  • Parallel machines & Distributed OS
    • Parallel
      • Parallel computing is closely related to concurrent computing
      • Possible to have parallelism without concurrency (bit-level parallelism) and concurrency without parallelism (multi-tasking by time sharing)
      • Parallel computing relies on split up sub tasks that can be processed independently.
    • Distributed OS
      • Collection of independent, networked, communicating, and physically separate computational nodes
  • General network architectures
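The first source above, overlapping I/O with computation, can be sketched with two threads; `time.sleep` stands in for a blocking I/O call and for CPU work (an assumption made to keep the sketch deterministic):

```python
import threading
import time

def io_task():
    time.sleep(0.3)   # stands in for a blocking I/O call (disk, network)

def compute_task():
    time.sleep(0.3)   # stands in for CPU work

start = time.time()
t = threading.Thread(target=io_task)
t.start()             # I/O proceeds in the background...
compute_task()        # ...while the main thread keeps computing
t.join()
elapsed = time.time() - start
# Overlapped: ~0.3 s total instead of ~0.6 s run sequentially
```

Run sequentially, the two tasks would take the sum of their durations; overlapped, the total is roughly the maximum of the two.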

Concurrent Architectures - Slide 7

  • SISD (Single Instruction, Single Data)
  • SIMD (Single Instruction, Multiple Data)
  • MISD (Multiple Instruction, Single Data)
  • MIMD (Multiple Instruction, Multiple Data)

Concurrency and Chaos

Can lead to:

  • Non-deterministic phenomena
    • Use non-determinism only where the underlying system is itself non-deterministic
  • Non-observable system states
    • Use where the precise execution sequence is irrelevant
  • Non-reproducible behaviour
    • Use synchronization, but only where necessary
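The classic symptom of non-reproducible behaviour is a lost update. The sketch below forces the unlucky interleaving with events so the race is reproducible on demand; in a real system the same race occurs only sometimes, which is precisely the non-reproducibility described above:

```python
import threading

counter = 0
a_read = threading.Event()
b_wrote = threading.Event()

def writer_a():
    global counter
    tmp = counter        # read
    a_read.set()         # let B run its full read-modify-write now
    b_wrote.wait()       # wait until B has written
    counter = tmp + 1    # write back a stale value: B's update is lost

def writer_b():
    global counter
    a_read.wait()        # runs between A's read and A's write
    counter = counter + 1
    b_wrote.set()

ta = threading.Thread(target=writer_a)
tb = threading.Thread(target=writer_b)
ta.start(); tb.start()
ta.join(); tb.join()
print(counter)  # 1, although two increments ran
```

Two increments executed, yet the final value is 1, because the read-modify-write sequences were interleaved.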

Levels of Concurrency

  • Slide 12

Concurrent Programming Abstraction

What appears sequential on a higher abstraction level, is usually concurrent at a lower abstraction level

  • Concurrent OS or hardware, not visible at higher programming level

What appears concurrent on a higher abstraction level, might be sequential at a lower abstraction level

  • Multi-processing system, processes interleaved on single sequential computing node

Interaction

  • No interaction between system components means we can analyse them individually as pure sequential programs.
    • Contention (implicit interaction) -> multiple concurrent executions competing for shared resources
    • Communication (explicit interaction) -> explicit passing of information and/or explicit synchronization
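Explicit interaction can be sketched as message passing over a channel; here a `queue.Queue` acts as the channel, and the blocking `get` provides the synchronization. The sentinel value `None` is an assumption of this sketch, used to signal end of stream:

```python
import threading
import queue

q = queue.Queue()
results = []

def producer():
    for i in range(3):
        q.put(i)        # explicit passing of information
    q.put(None)         # sentinel: no more messages

def consumer():
    while True:
        item = q.get()  # blocks: synchronization on the channel
        if item is None:
            break
        results.append(item * 10)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 10, 20]
```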

Correctness

Time

  • Physical -> Durations explicitly considered: Real-time systems

  • Logical -> Sequence of interaction points considered only: Non-real-time systems

  • Correctness of concurrent non-real-time systems

    • Doesn't depend on clock speeds, execution times, delays
    • Doesn't depend on actual interleaving of concurrent processes
    • Holds true for all possible sequences of interaction points
  • Standard concepts of correctness - Slide 20

  • Partial Correctness

    • If the program terminates on an input satisfying the precondition, then the produced output satisfies the postcondition

$$\forall I:\; \big(P(I) \land \mathit{terminates}(\mathit{Program}(I,O))\big) \implies Q(I,O)$$

  • Total Correctness
    • Given some input satisfying the precondition, the program terminates and its output satisfies the postcondition

$$\forall I:\; P(I) \implies \big(\mathit{terminates}(\mathit{Program}(I,O)) \land Q(I,O)\big)$$

Processes and Threads

  • Slides 24

  • Processes

    • Heavyweight and mostly isolated
    • Do not share data
    • Kernel has full knowledge about all processes
  • Threads

    • Lightweight, share memory
    • Share data with each other
    • Inside and outside the OS
  • Processes or Threads

  • A process contains a group of threads

  • Thread switching and inter-thread communication can be more efficient than switching at the process level
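The "threads share memory" point can be shown directly: every thread in a process sees the same heap, so they can all mutate one object. Separate processes would each get their own copy instead. A minimal sketch, assuming CPython's `threading` module:

```python
import threading

shared = []  # one heap object, visible to every thread in the process

def worker(tid):
    shared.append(tid)  # all threads mutate the same list

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3] — every thread wrote into shared memory
```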

PCBs

States

| Name      | Description                                                   |
| --------- | ------------------------------------------------------------- |
| Created   | Task ready to run, not yet considered (waiting for admission) |
| Ready     | Ready to run, waiting for a CPU                               |
| Running   | Holding a CPU and executing                                   |
| Blocked   | Not ready to run (waiting for a resource)                     |
| Suspended | Swapped out of main memory (non-time-critical)                |
  • Unix Processes - Slide 36

Non-Determinism

  • Slide - 45

  • By design -> Property of a computation that may have had more than one result

  • By interaction -> Property of the operation environment leading to different sequences of (concurrent) stimuli

  • Message Selective Synchronization -- Slide 52

  • Sources of Non-Determinism -- Slide 62

Critical Section Problem

  • Mutual Exclusion -> Instructions from critical sections of two or more processes must never be interleaved
  • No Deadlock -> If some processes are trying to enter their critical sections, one of them must eventually succeed
  • No Starvation -> If any process tries to enter its critical section, then it must eventually succeed
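Mutual exclusion can be sketched by guarding the critical section with a lock, so instructions inside it are never interleaved between threads. A minimal sketch using Python's `threading.Lock`:

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:        # entry protocol: at most one thread may proceed
            counter += 1  # critical section
        # exit protocol: the `with` block releases the lock

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 — no updates lost inside the critical section
```

Without the lock, the read-modify-write of `counter += 1` could interleave between threads and lose updates.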

Info

  • Concurrency:
    • non-deterministic composition of independently executing units (logical parallelism)
  • Parallelism:
    • efficient execution of fractions of a complex task on multiple processing units (physical parallelism)

Non-determinism

The inability to directly predict an outcome or result, usually due to a lack of knowledge about the initial conditions and current workings of a system.

Cores and Clock speed

  • More cores, slower clock speed

    • Pros
      • Multi-threading applications benefit
      • Cost effective way of increasing performance
      • Run more programs at once
    • Cons
      • Lower single threaded performance than a higher clock speed processor
        • Worse performance for linear programs (single core utilisation)
  • Fewer cores, higher clock speed

    • Pros
      • Better single threaded performance
      • Lower cost option
    • Cons
      • Fewer cores to split between applications
      • Not as strong multi-threading performance

Ada Stuff


Chapel Stuff