
Commit 1a583d4

Updates
1 parent eda357b commit 1a583d4


63 files changed: +2308 -807 lines
@@ -0,0 +1,117 @@
# Design patterns

* there are 23 patterns in the book
* the patterns include solutions which have developed or evolved over time
    * => they are not designs people initially tend to generate
    * they are solutions people have evolved into
    * ASIDE: it is no surprise that they often feel like "overkill" given this history
* the patterns represent "common ways that objects can collaborate"
* the goal of the patterns is to make your code more **reusable**
    * => my guess is that design pattern knowledge is most applicable during
      the "refactor" phase where you have a solution and are now trying to find a
      code arrangement to make that solution optimal.
* design patterns are a bunch of solutions with known trade-offs
* the patterns are
    > a record of experience in designing OO software
* they are intended to give you a "leg up" when designing software
* design areas not in the book
    * concurrency
    * any application-domain-specific patterns e.g. web, games, databases
    * real-time programming
    * UI design
    * device drivers
* the patterns in the book are all at a particular level
    * not as low-level as building blocks like linked lists, hash tables etc.
    * not as high-level as an entire DSL for some application area
    * descriptions of objects and classes designed to solve a general design problem in a particular context
* code examples are in C++ and Smalltalk
    * the patterns are influenced by what can be done **easily** in those languages
    * e.g. if they had used C they might have added patterns for "inheritance", "encapsulation", etc.

Pros/cons of the patterns (from my POV)

* ++ if other devs on the team are familiar with them then you can communicate architecture very quickly
* ?? if other devs on the team are not familiar with them?

The book has 2 parts

1. Chapters 1 & 2: describe what patterns are and how to use them
2. Chapters 3 - 5: the catalog of patterns, divided by
    * Purpose (three types)
        * creational
            * deals with how objects are created
            * two subtypes
                * Class: defers some part of object creation to subclasses
                * Object: defers some part of object creation to another object
        * structural
            * deals with the composition of classes and objects
            * two subtypes
                * Class: use inheritance to compose classes
                * Object: describe ways to assemble objects
        * behavioral
            * the way objects interact and distribute responsibility
            * two subtypes
                * Class: use inheritance to describe algorithms and flow of control
                * Object: describe how a group of objects can cooperate to perform a task that they could not do individually
    * Scope
        * Class: static (fixed at compile time)
        * Object: dynamic (can be changed at runtime)

Structure of a pattern

* name
    * allows the team to discuss the pattern
* problem
    * description of when to use the pattern
    * sometimes a set of criteria that must be met before you should use it
* solution
    * describes a general arrangement of classes that will solve the problem
    * describes their
        1. relationships
        1. responsibilities
        1. collaborations
* consequences
    * pros/cons of the solution
    * the trade-offs of the solution

Strategy pattern example

* an object that represents an algorithm
* useful when
    * the algorithm has complex data structures you want to hide
    * you want to replace the algorithm either statically or dynamically
    * there are a lot of variants of the algorithm
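
A minimal sketch of the idea in C++ (the book's example language). The sorting-flavoured class names are made up for illustration, they are not from the book:

```cpp
#include <memory>
#include <vector>

// The "strategy" is an object that stands in for an algorithm behind a fixed
// interface - callers never see the data structures it uses internally.
class SortStrategy {
public:
    virtual ~SortStrategy() = default;
    virtual void sort(std::vector<int>& data) const = 0;
};

class QuickSort : public SortStrategy {
public:
    void sort(std::vector<int>& data) const override { /* one variant */ }
};

class InsertionSort : public SortStrategy {
public:
    void sort(std::vector<int>& data) const override { /* another variant */ }
};

// The client holds a strategy and can be given a different one at
// construction time (statically) or later via setStrategy (dynamically).
class Sorter {
public:
    explicit Sorter(std::unique_ptr<SortStrategy> s) : strategy_(std::move(s)) {}
    void setStrategy(std::unique_ptr<SortStrategy> s) { strategy_ = std::move(s); }
    void sort(std::vector<int>& data) const { strategy_->sort(data); }
private:
    std::unique_ptr<SortStrategy> strategy_;
};
```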

They make an analogy to playwrights, who often re-use stories that have the same structure.

They mention a number of times that the objects in the patterns are never found
in the initial stages of design - they emerge when we are trying to make the
design more flexible or reusable.

Design patterns can be considered "techniques for making my existing design
more flexible and reusable". They are not "starting points for my design".

* Each operation defined by an object has a signature
    * signature = the operation's name, the objects it takes as arguments, and its return type
* A set of signatures is an _Interface_.
* A _Type_ is a name used to denote a particular _Interface_.
    * An object can have many types
    * A type can be implemented by many different objects
* Interfaces can contain other interfaces as subsets
    * type C is a _subtype_ of P if the interface of C fully contains the interface of P. Type P is the supertype of C.
    * We say that type C "inherits" from type P
* Objects are known only through their interfaces
    * An interface says nothing about implementation
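
A small C++ sketch of the terminology above (class names made up):

```cpp
// The set of signatures below is an interface; the type "Shape" is a name
// that denotes it.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
    virtual void draw() const = 0;
};

// "ColoredShape" is a subtype of "Shape": its interface fully contains
// Shape's interface and adds one more operation.
class ColoredShape : public Shape {
public:
    virtual int color() const = 0;
};

// Client code knows the object only through its interface - nothing here says
// anything about how area() is implemented.
double doubled_area(const Shape& s) { return 2 * s.area(); }
```
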
## Chapter 2: Case study - Lexi
docker/build-a-docker-image.md

+13-9
@@ -7,34 +7,38 @@
* The first thing a build process does is send the entire context (recursively) to the daemon.
* In most cases, it’s best to start with an empty directory as context and keep your Dockerfile in that directory. Add only the files needed for building the Dockerfile.

The Docker daemon runs the instructions in the Dockerfile one-by-one,
committing the result of each instruction to a new image if necessary, before
finally outputting the ID of your new image. The Docker daemon will
automatically clean up the context you sent

Note that each instruction is run independently, and causes a new image to be
created - so `RUN cd /tmp` will not have any effect on the next instructions.
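
e.g. (hypothetical Dockerfile fragment):

```dockerfile
RUN cd /tmp      # the cd happens in its own intermediate container and is then lost
RUN touch a.txt  # runs in the image's default working dir, NOT /tmp
WORKDIR /tmp     # WORKDIR is the instruction that changes directory persistently
RUN touch b.txt  # creates /tmp/b.txt
```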

Whenever possible, Docker will re-use the intermediate images (cache), to
accelerate the docker build process significantly.

* This is indicated by the `Using cache` message in the console output.
* GOTCHA: this also means you have to `apt-get update` and `apt-get install` in the same step or the image from the "update" will be cached and not changed when you change the install command
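
e.g. a Dockerfile sketch of the difference (the package names are only examples):

```dockerfile
# BAD: the "update" layer is cached, so later edits to the install line can
# install from a stale package index
RUN apt-get update
RUN apt-get install -y curl

# GOOD: one instruction = one layer, so changing the package list also
# re-runs the update
RUN apt-get update && apt-get install -y curl git
```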

## Steps

```sh
docker images --all # see what is already installed

docker pull ruby:2.3.1-slim # optional - the build will pull for you if you need it

# build a new image from a Dockerfile in current working dir

# the docker daemon uses the cwd as the "context" for the build so the docker
# client will copy the entire cwd contents to the daemon before build i.e. don't
# build from `/`!!!
docker build -t eoin-ruby-test-1 .

docker history eoin-ruby-test-1 # show layers history
docker inspect eoin-ruby-test-1 | jq . # JSON dump of image metadata

# TODO: how to clean up layers from images that didn't build properly

# docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
docker run -it --name eoins-container eoin-ruby-test-1 bash

docker/cheat-sheet.md

+2
@@ -0,0 +1,2 @@
docker run -v ~/DockerVolumes/postgres:/var/lib/postgresql/data -p 5432:5432 --name postgres_new2 postgres
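
Flag breakdown of the command above:

```sh
# -v <host dir>:<container dir>   keep postgres's data dir on the mac so it outlives the container
# -p <host port>:<container port> publish 5432 so clients on the mac can connect
# --name                          fixed container name instead of a generated one
docker run \
  -v ~/DockerVolumes/postgres:/var/lib/postgresql/data \
  -p 5432:5432 \
  --name postgres_new2 \
  postgres
```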

docker/docker-compose.md

+8
@@ -0,0 +1,8 @@
GOTCHA: `depends_on` will not wait for db and redis to be “ready” before starting web - only until they have been started
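
A sketch of the shape this takes in `docker-compose.yml` (service names from the note above, images are placeholders):

```yaml
version: "2"
services:
  web:
    build: .
    depends_on:
      - db     # controls start order only - web may still start before postgres accepts connections
      - redis
  db:
    image: postgres
  redis:
    image: redis
```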

docker-compose up
docker-compose exec --user postgres db psql
docker-compose run web bundle exec rake db:setup

docker/docker-for-mac.md

+24
@@ -0,0 +1,24 @@
# Docker for Mac

* runs as a native Mac application
* uses xhyve to virtualize the Docker Engine environment and Linux kernel-specific features for the Docker daemon.

Xhyve

https://github.com/mist64/xhyve/

> The xhyve hypervisor is a port of bhyve to OS X. It is built on top of
> Hypervisor.framework in OS X 10.10 Yosemite and higher, runs entirely in
> userspace, and has no other dependencies. It can run FreeBSD and vanilla
> Linux distributions and may gain support for other guest operating systems in
> the future.

> bhyve is the FreeBSD hypervisor, roughly analogous to KVM + QEMU on Linux. It
> has a focus on simplicity and being legacy free.

At installation time, Docker for Mac provisions a HyperKit VM based on Alpine Linux, running Docker Engine. It exposes the docker API on a socket in /var/tmp/docker.sock. Since this is the default location where docker will look if no environment variables are set, you can start using docker and docker-compose without setting any environment variables.
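
e.g. a quick sanity check (sketch):

```sh
env | grep DOCKER   # prints nothing - no DOCKER_HOST etc. needs to be set
docker info         # still reaches the daemon via the default socket
```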

You can't route network traffic between containers and the host when you are running "docker for mac"

docker/persistent-data.md

+45-6
@@ -1,4 +1,14 @@
# Cheat sheet

```
docker run -it -v ~/DockerVolumes/play:/play --name test2 debian /bin/bash
```

Kitematic shows the "in container" path on its container homepage

# Background

Docker has two ways to make data last longer than the container

1. data volume
2. data volume container
@@ -11,19 +21,48 @@ Two ways to make data last longer than the container
* data volume persists even if the container is deleted
* data volume can be shared and re-used between containers
* use `docker inspect` to see details of the volume
* data volumes are NOT the same as mounting a directory from the host machine
    * you can optionally mount a directory from the host machine into the volume
* changes to data volumes are made directly (not through unionfs)
    * they persist even if the container is deleted

* There is a complication on Mac and Windows because the "docker host" is actually a linux VM.
    * docker mounts `/Users` from the mac into `/Users` on the docker host so you can share dirs from /Users with containers

> Data volumes provide the best and most predictable
> performance. This is because they bypass the storage
> driver and do not incur any of the potential
> overheads introduced by thin provisioning and
> copy-on-write. For this reason, you may want to place
> heavy write workloads on data volumes.

* docker does not automatically delete a volume when you remove a container
* it will also not "garbage collect" volumes that are no longer referenced by any container

* mounting a host (note: linux is the host, not mac) directory is just a special case of creating a volume. Instead of the volume being in something like `/var/lib/docker/volumes/fac362...80535/_data` it is an existing dir on your linux-host.

Where are volumes stored?

* volumes are stored on the linux-host filesystem (not in the unionfs files that the containers use)

How do I see what volumes are on a docker host?

* writing into the container can have perf impacts depending on the storage driver being used by the container

The docker VM on mac "auto shares" a number of mac filesystem dirs from the mac into the VM

```
/Users
/Volumes
/tmp
/private
```

# Data volume containers

> If you have some persistent data that you want to share between containers,
> or want to use from non-persistent containers, it’s best to create a named Data
> Volume Container, and then to mount the data from it.

Create a container which has the job of mounting the data volume (from the linux-host) and making that data available to other containers
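
A sketch of the usual shape of this (container and path names made up):

```sh
# 1. a container whose only job is to own the volume - it never needs to run
docker create -v /dbdata --name dbstore postgres /bin/true

# 2. other containers mount its volumes with --volumes-from
docker run -d --volumes-from dbstore --name db1 postgres
docker run --rm --volumes-from dbstore -v "$(pwd)":/backup debian tar cvf /backup/backup.tar /dbdata
```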

elixir/attributes.exs

+15
@@ -35,3 +35,18 @@ end

Foo.show_metadata_constants
Foo.show_changed_metadata_constants

#################################################
#################################################
#################################################
# Chris McCord uses this pattern in his code - why does he assign to a variable first?

defmodule Thing do
  phoenix_path = Application.app_dir(:phoenix, "priv/static/phoenix.js")
  reload_path = Application.app_dir(:phoenix_live_reload, "priv/static/phoenix_live_reload.js")
  @external_resource phoenix_path
  @external_resource reload_path

  @phoenix_js File.read!(phoenix_path)
  # ...
end
