Semantics, Typing, and Paradigms in Programming Languages
Operational Semantics
Describes the meaning of a program in terms of its step-by-step execution on an abstract machine. It is concerned with how a program executes.
Denotational Semantics
Maps programs to mathematical objects (functions, sets, etc.) to describe their meaning. It focuses on what a program computes, abstracting away how it executes.
Axiomatic Semantics
Specifies the behavior of programs using logical assertions (preconditions and postconditions). It is used to prove program correctness without focusing on how it runs or what it computes.
Attributes for Names & Binding Time
- Type/Scope: Compile time
- Lifetime: Load time for global variables; runtime for local and dynamically allocated variables
- Location: Global/static variables get fixed memory at load time; local variables live on the stack and dynamic variables on the heap, both assigned at runtime
- Value: Compile time for constants and runtime for computed variables
Environment
Binds names to locations.
Language Typing
- Java: Static + Strong
- Haskell: Static + Strong
- Erlang: Dynamic + Strong
- Prolog: Dynamic + Weak
Tail Recursion
With tail-call optimization, the recursive call reuses the current stack frame, so the whole recursion runs in constant memory (one frame).
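A minimal Haskell sketch (names are illustrative): in the second version the recursive call is the entire result, so an optimizing implementation can reuse the single frame.

  -- Not tail-recursive: each call must keep its frame to apply (n +) afterwards.
  sumTo :: Int -> Int
  sumTo 0 = 0
  sumTo n = n + sumTo (n - 1)

  -- Tail-recursive: the recursive call is the whole result, so the frame is reusable.
  sumTo' :: Int -> Int
  sumTo' n = go n 0
    where
      go 0 acc = acc
      go k acc = go (k - 1) (acc + k)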
Why Tail Recursion Doesn’t Always Help Memory Usage
Lazy evaluation defers computation until values are needed: deferred work piles up as unevaluated thunks. Under lazy evaluation, a tail-recursive accumulator can itself grow into a chain of thunks, so tail recursion alone does not reduce memory usage.
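For instance, in GHC the accumulator of the tail-recursive sum above can accumulate as thunks; forcing it at each step (a sketch using seq) restores constant space.

  -- Lazily, acc grows into the thunk chain (((0 + n) + (n - 1)) + ...).
  -- Forcing acc' before recursing keeps it a plain number.
  sumStrict :: Int -> Int
  sumStrict n = go n 0
    where
      go 0 acc = acc
      go k acc = let acc' = acc + k in acc' `seq` go (k - 1) acc'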
Hindley-Milner Inference
Automatic type inference at compile time. Programs can omit type declarations, so writing code feels like dynamic typing while remaining statically checked.
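For example, GHC infers a fully polymorphic type for code written without any annotations:

  -- No signatures written; inference derives them.
  compose f g x = f (g x)
  -- inferred: compose :: (b -> c) -> (a -> b) -> a -> c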
Referential Transparency
Functions depend only on inputs (not globals or hidden state). Easier to debug and reason about than programs whose behavior depends on hidden state.
Derive Type Declarations
Analyze the function's arguments and return type. Use polymorphic type variables (a, b, etc.) when arguments can be of any type. Add class constraints (e.g., Eq a for elem) when the function requires specific operations on a type.
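A worked instance of the recipe, written against an elem-style membership test (elem' and length' are illustrative names):

  -- Membership requires (==) on elements, hence the Eq a constraint.
  elem' :: Eq a => a -> [a] -> Bool
  elem' _ []     = False
  elem' x (y:ys) = x == y || elem' x ys

  -- length' never inspects the elements, so the element type stays unconstrained.
  length' :: [a] -> Int
  length' []     = 0
  length' (_:xs) = 1 + length' xs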
Real Prolog Interpreter
Is deterministic: clauses are tried in textual order and the search space is explored depth-first with backtracking, unlike the idealized nondeterministic model.
Higher-Order Functions
Functions that take other functions as arguments or return them as results; one attribute of functional programming languages that is finding its way into more conventional languages.
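A small Haskell illustration (names illustrative): map and filter are higher-order, each taking a function as an argument.

  doubled :: [Int] -> [Int]
  doubled = map (* 2)        -- the function (* 2) is passed as data

  evensOnly :: [Int] -> [Int]
  evensOnly = filter even    -- the predicate even is passed as data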
High-Performance and Interpreted
Two of Java’s design goals seem to conflict: interpreting portable bytecode is slower than running native code, so "high performance" and "interpreted" pull in opposite directions (modern JVMs reconcile them with just-in-time compilation).
Recursion
Solves a problem by breaking it into smaller subproblems of the same type. Each recursive call consumes stack memory to store intermediate state. Risks: may lead to stack overflow when recursion depth is too high (unless rewritten tail-recursively), and overlapping subproblems may be recomputed (unless optimized with memoization).
Runtime Considerations
Recursive function calls have overhead due to stack frames and stored intermediate state, and recomputing overlapping subproblems can make the runtime exponential (e.g., naive Fibonacci, sketched below).
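Fibonacci is the classic illustration; a sketch of both versions, with memoization done here via a shared lazy list:

  -- Naive: fib (n-1) and fib (n-2) recompute the same subproblems; exponential time.
  fib :: Int -> Integer
  fib 0 = 0
  fib 1 = 1
  fib n = fib (n - 1) + fib (n - 2)

  -- Memoized: each value is computed once and shared through the list; linear time.
  fibs :: [Integer]
  fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

  fibMemo :: Int -> Integer
  fibMemo n = fibs !! n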
Iterative (Bottom-Up) Approach
Uses loops, avoiding overhead of function calls. Saves memory and runtime, making it preferable for large inputs or high recursion depth. Avoids stack usage and redundant computations.
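Haskell has no loops, but the accumulator-passing style plays the same bottom-up role, carrying the last two values forward so nothing is recomputed (a sketch):

  fibIter :: Int -> Integer
  fibIter n = go n 0 1
    where
      go 0 a _ = a
      go k a b = go (k - 1) b (a + b)  -- advance the pair (fib m, fib (m+1)) one step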
Indirect Recursion
Two or more functions call each other in a loop.
Cons
Complex call chains, harder to debug. Example: f1 → f2 → f1.
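A minimal sketch of indirect recursion (illustrative names): isEven calls isOdd, which calls isEven back.

  isEven :: Int -> Bool
  isEven 0 = True
  isEven n = isOdd (n - 1)   -- indirect: bounces to isOdd

  isOdd :: Int -> Bool
  isOdd 0 = False
  isOdd n = isEven (n - 1)   -- ...and back to isEven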
Imperative Programming
Focuses on how a task is performed, as a series of commands. Problems arise from explicit state and control structures (variables, loops, etc.). State is stored in variables that name memory locations, mirroring the von Neumann architecture (as in Java).
Key Feature Comparison
Java: State and communication are explicit (imperative focus), whereas functional languages express computation through recursion and expression evaluation (declarative focus).
Functional Programming
No manual memory management and no mutable state (clarity in problem solving leads to elegant solutions). Optimized for parallel execution (critical as Moore’s law slows): the stateless, side-effect-free nature lets computations run in parallel without interfering.
Concise & Clear Programs
Due to referential transparency + abstraction from von Neumann details (improves programmer productivity & reduces complexity).
Easy to Test & Debug
Behavior is solely dependent on input arguments (predictable functions) with no hidden states, which simplifies testing by removing stateful errors. Example (quicksort):
  qsort [] = []
  qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
Why Do We Need Tail Recursion in Concurrency?
Concurrent processes (e.g., Erlang servers) run as infinite receive loops implemented by recursion; the loop must be tail-recursive so it runs in constant stack space instead of consuming more memory on every iteration.
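Sketched in Haskell (the loop body is hypothetical): a process loop that runs forever yet uses constant stack, because the self-call is in tail position.

  serverLoop :: Int -> IO ()
  serverLoop count = do
    putStrLn ("handled " ++ show count ++ " messages")
    serverLoop (count + 1)  -- tail call: the infinite loop never grows the stack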
Performance Trade-Offs
Can be less efficient in time + space when compared to imperative programs (recursion + immutability lead to overhead).
Functional Programming Paradigm
Computation is evaluation of mathematical functions, mapping input domains to output ranges. Programs are compositions of functions; one function calls another in a hierarchical structure. (In stateful code, by contrast, code reuse becomes challenging due to unpredictable state.)
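A sketch of a program as a composition of functions, piped together with (.):

  sumOfSquaredEvens :: [Int] -> Int
  sumOfSquaredEvens = sum . map (^ 2) . filter even
  -- sumOfSquaredEvens [1..5]  ==>  4 + 16 = 20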
Statelessness
Functional programs do not maintain/modify global/local states; effects don’t persist after execution. Avoids side effects: state dependencies in large programs add complexity.
Referential Transparency
Any (sub)expression can be replaced by its value without changing overall computation. Simplifies reasoning about code, enables optimizations like caching results for repeated evaluations.
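For example, a repeated subexpression can be replaced by a shared name (or its value) without changing the result:

  area :: Double -> Double
  area r = pi * r * r

  -- Both compute the same thing; caching the value is always safe.
  twiceArea1, twiceArea2 :: Double -> Double
  twiceArea1 r = area r + area r
  twiceArea2 r = let a = area r in a + a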
Java Modifiers
- Private: Used only within a class; accessed through methods within the class.
- Static: All instances share the same variable (class-level/shared); a non-access modifier.
- Final: Cannot be reassigned/overridden; protects state. Also a non-access modifier.
- Public: Accessible from anywhere.
- Protected: Accessible by subclasses or within the same package.
Inheritance
Extension of a class with inherited fields & methods. super: used to call parent class methods or the parent constructor.
Overriding
Allows methods with the same name + signature to replace parent methods in child classes.
Polymorphism
Enables parent references to point to child objects; the overriding method in the child class is the one that runs, chosen by the object's runtime type (dynamic dispatch).
Haskell
Named after Haskell Curry; a purely functional programming language rooted in logic.
- Pure functional programming: no side effects; computations rely solely on function inputs, contrasting with Clojure & Lisp, which allow mutable state.
- Static typing: types are checked at compile time to prevent harmful type conversions; they can be declared explicitly or deduced automatically via Hindley-Milner type inference.
- Lazy evaluation: arguments are evaluated only when necessary, allowing efficient handling of potentially infinite data structures.
- Referential transparency: the same function call always produces the same result, simplifying reasoning & debugging.
- Modern applications: suited for symbolic computations, concurrent processes, & scenarios prioritizing correctness over raw performance.
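A minimal sketch of lazy evaluation over an infinite structure:

  naturals :: [Integer]
  naturals = [0 ..]            -- conceptually infinite; never fully built

  firstFive :: [Integer]
  firstFive = take 5 naturals  -- forces only five elements: [0,1,2,3,4]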