Compare commits

..

48 Commits

Author SHA1 Message Date
5ca06e59ae accounting updated as of 8/30 2025-09-03 22:38:37 -04:00
71ac6d98b7 July money 2025-07-20 11:51:57 -04:00
fd695bff47 things for taxes 2025-03-26 20:52:08 -04:00
5f83135677 More accounting stuff 2025-03-15 13:45:54 -04:00
943d836da0 more accounting 2025-03-02 16:06:36 -05:00
7af2dc7558 Add more things! 2025-01-13 08:47:11 -05:00
b1291c3b52 more 2024-12-09 22:47:22 -05:00
f309f1f5e1 side notes 2024-12-02 10:16:01 -05:00
1618e67ffa Adding wedding finances 2024-12-02 10:13:53 -05:00
9ce2aad3bc Added backlog-refinement and misc 2024-11-25 11:07:54 -05:00
89c592d671 Test plan 2024-11-17 08:34:59 -05:00
181b9a7bc3 Updates 2024-11-12 10:13:44 -05:00
63fe9cf740 Test plan 2024-09-18 17:39:12 -04:00
09d18e5d3a Permissions docs 2024-09-18 12:34:32 -04:00
1da31679cb Add the rambling on capability systems 2024-07-30 08:55:57 -04:00
1ddc9f19f1 Move to incidents subfolder 2024-07-30 08:55:56 -04:00
a26bb2a1a4 More stuff 2024-07-30 08:55:44 -04:00
716c552a58 More updates 2024-05-09 12:25:24 -04:00
8eb5363073 do it 2024-05-03 13:34:50 -04:00
8385a902df Tuesday stretches 2024-04-24 03:11:43 +00:00
a94b9d51ac Update for sunday stretches 2024-04-21 21:04:35 -04:00
bac60a72a8 Started mobility program 2024-04-20 22:27:48 -04:00
10adda26b7 make folder for programming languages 2024-04-20 10:25:40 -04:00
f752c35621 move work design under equinix 2024-04-20 10:24:33 -04:00
4daae4e756 more cleanup 2024-04-20 10:23:31 -04:00
b4f4565894 Cleaning up directory 2024-04-20 10:21:42 -04:00
d6afd9f472 Add some testing for linux network namespaces 2024-04-20 10:15:10 -04:00
0e9dbebd6a Some new design docs 2024-04-20 10:13:39 -04:00
12cf3967ee moar 2023-10-30 15:31:17 -04:00
6f8d6220fa Move equinix watch things 2023-10-11 09:16:47 -04:00
1ee5ac51e7 add equinix watch 2023-10-04 11:52:55 -04:00
d2f7ac8aef more 2023-09-26 10:27:22 -04:00
c9ad391873 starting my slipbox 2023-09-26 10:27:22 -04:00
815315bfb1 cleanup 2023-09-19 11:09:27 -04:00
b8af13c8f4 more 2023-08-17 10:40:22 -04:00
fcf7195e97 more 2023-08-16 15:49:23 -04:00
cef3a19c35 moar 2023-08-16 10:35:31 -04:00
61a7438251 doit 2023-08-15 17:31:19 -04:00
82dcab9f9d Big dump 2023-07-24 11:13:50 -04:00
5f13efe08e Add some home projects 2023-07-04 20:07:24 -04:00
efb5e7b419 Update 7/4 2023-07-04 13:09:27 -04:00
41cbab3230 Adding recipes 2023-06-25 19:59:02 -04:00
fedfeb03f5 Add infratographer notes from friday 2023-06-16 17:14:05 -04:00
5e795e3519 Add some common lisp bits 2023-06-12 22:09:53 -04:00
47dc5e09ad Notes 6/7/23 2023-06-07 22:20:32 -04:00
b1e8dea50e more notes 2023-06-01 16:12:31 -04:00
cac252eabc Update the readme.org 2023-05-12 09:59:06 -04:00
03694ee62d Added more about ruby3 upgrades 2023-05-12 09:57:29 -04:00
46 changed files with 5184 additions and 56 deletions

429
accounting/july.ledger Normal file
View File

@@ -0,0 +1,429 @@
~ Monthly
expenses:mortgage $2565.89
expenses:food $500.00
expenses:utilities $639.61
assets:checking
2025/07/01 * Opening Balances
assets:checking $31969.37
assets:savings:e-trade $149105.87
assets:savings:pnc $49105.92
equity:opening balance
2025/07/01 * NEWEGG MARKETPLACE
expenses:shopping:newegg $146.26
expenses:shopping:newegg $21.08
assets:checking
2025/07/01 * NYTIMES
expenses:subscriptions:nytimes $23.00
assets:checking
2025/07/01 * MORTGAGE SERV CT MTG PAYMT
expenses:mortgage $2565.89
assets:checking
2025/07/02 * NEWEGG MARKETPLACE
expenses:shopping:newegg $34.04
expenses:shopping:newegg $125.06
assets:checking
2025/07/02 * THIRD TANK LLC
expenses:shopping:books $49.95
assets:checking
2025/07/02 * WAWA
expenses:food:takeout $18.10
assets:checking
2025/07/03 * EQUINIX SERVICES DIRECT DEP
assets:checking $3722.38
income:equinix
2025/07/07 * Prime Video
expenses:entertainment:movie $4.02
assets:checking
2025/07/07 * AMAZON PRIME*N34V921M1
expenses:shopping:amazon $147.34
assets:checking
2025/07/07 * OWOWCOW CHALFONT
expenses:food:ice-cream $13.38
assets:checking
2025/07/07 * GIANT
expenses:groceries:giant $77.15
assets:checking
2025/07/08 * Verizon
expenses:utilities:phone $158.64
assets:checking
2025/07/09 * Sewer
expenses:utilities:sewer $92.53
expenses:utilities:sewer $2.95 ; credit card fee
assets:checking
2025/07/09 * HEALTHEQUITY INC
assets:health-savings $340
assets:checking
2025/07/09 * Transfer to E-Trade
assets:savings:e-trade $45000
assets:savings:pnc
2025/07/10 * Backblaze
expenses:home:cloud-storage $0.50
assets:checking
2025/07/15 * Sling TV
expenses:subscriptions:slingtv $47.70
assets:checking
2025/07/15 * AMC 0670 CINEMA 9
expenses:entertainment:movies $16.52
assets:checking
2025/07/16 * Soccer Team Dues
expenses:memberships:soccer $116.74
assets:checking
2025/07/16 * PECO
expenses:utilities:electric+gas $247.45
assets:checking
2025/07/16 * GIANT
expenses:groceries:giant $8.84
assets:checking
2025/07/17 * AQUA
expenses:utilities:water $58.55
assets:checking
2025/07/17 * Vacation
expenses:vacation:honey-moon $3670.31
assets:checking
2025/07/18 * EQUINIX SERVICES DIRECT DEP
assets:checking $3722.37
income:equinix
2025/07/18 * Donation
expenses:donations:garden-of-health $103.00
assets:checking
2025/07/21 * Rita's
expenses:food:ice-cream $11.41
assets:checking
2025/07/21 * Spotify
expenses:subscriptions:spotify $12.71
assets:checking
2025/07/21 * Panera
expenses:food:takeout $29.53
assets:checking
2025/07/21 * HBO
expenses:subscriptions:hbo $18.35
assets:checking
2025/07/21 * Halmari Tea (Amarawaite)
expenses:food:tea $67.00
assets:checking
2025/07/21 * Home Depot
expenses:home:maintenance $299.52
assets:checking
2025/07/22 * Internet
expenses:utilities:internet $79.49
assets:checking
2025/07/23 * Jamison Lawn - Treatment
expenses:home:lawn-care $134.37
expenses:credit-card
2025/07/24 * Steam Games
expenses:entertainment:steam $8.47
assets:checking
2025/07/24 * Interest
assets:checking $0.26
income:interest
2025/07/25 * Healthy Gamer
expenses:subscriptions:healthy-gamer $9.99
assets:checking
2025/07/25 * Jamison Lawn - Mowing
expenses:home:lawn-care $58.00
expenses:credit-card
2025/07/28 * Farmhouse
expenses:food:takeout $62.40
assets:checking
2025/07/28 * Wawa
expenses:food:takeout $18.82
assets:checking
2025/07/28 * Train to philly
expenses:travel:train $16.50
expenses:travel:parking $2.00
assets:checking
2025/07/28 * Food in Philly
expenses:food:takeout $10.03
expenses:food:takeout $16.06
assets:checking
2025/07/28 * NYTimes
expenses:subscriptions:nytimes $23.00
assets:checking
2025/07/29 * Computer Infra (Geek-Hub)
expenses:subscriptions:usenet $12.00
; fee for international transaction
expenses:subscriptions:usenet $0.36
assets:checking
2025/07/29 * Allstate Premium
expenses:insurance $249.95
assets:checking
2025/07/31 * Sporty's E6B APP
expenses:flying:apps $10.59
assets:checking
2025/07/31 * Interest E-Trade
assets:savings:e-trade $613.71
income:interest
2025/08/01 * EQUINIX SERVICES DIRECT DEP
assets:checking $3876.80
income:equinix
2025/08/01 * Books
expenses:entertainment:books $10.60
assets:checking
2025/08/01 * MORTGAGE SERV CT MTG PAYMT
expenses:mortgage $2565.89
assets:checking
2025/08/02 * Jamison Lawn - Mowing
expenses:home:lawn-care $58.30
expenses:credit-card
2025/08/04 * Mission BBQ
expenses:food:takeout $35.59
assets:checking
2025/08/04 * Steam
expenses:entertainment:steam $2.11
assets:checking
2025/08/04 * Wawa
expenses:food:takeout $6.22
assets:checking
2025/08/04 * Dairy Bar
expenses:entertainment:ice-cream $16.61
assets:checking
2025/08/04 * Burger Bar
expenses:food:takeout $28.56
assets:checking
2025/08/04 * Emil's Bday Ice Cream
expenses:entertainment:ice-cream $33.26
assets:checking
2025/08/04 * Foreflight
expenses:aviation:foreflight $132.50
assets:checking
2025/08/05 * Starbucks
expenses:food:takeout $5.53
assets:checking
2025/08/07 * Haircut
expenses:personal:haircut $44.40
assets:checking
2025/08/08 * Casey Muratori Computer Enhance
expenses:subscriptions:computer-enhance $108.00
assets:checking
2025/08/08 * Phone Bill
expenses:utilities:phone $124.57
assets:checking
2025/08/08 * Transfer to HealthEquity
assets:health-savings $340.00
assets:checking
2025/08/08 * Jamison Lawn - Mowing
expenses:home:lawn-care $58.30
expenses:credit-card
2025/08/11 * Jill Shared Rent
assets:checking $905.00
income:rent
2025/08/11 * Imprint
expenses:food:takeout $34.34
assets:checking
2025/08/11 * Backblaze (cloud storage)
expenses:home:cloud-storage $0.53
assets:checking
2025/08/11 * Autozone
expenses:auto:maintenance $154.73
assets:checking
2025/08/11 * Giant
expenses:groceries:giant $16.89
assets:checking
2025/08/11 * Autozone
expenses:auto:maintenance $7.41
assets:checking
2025/08/11 * Leading Edge
expenses:aviation:account $1000.00
assets:checking
2025/08/12 * Barnes & Noble
expenses:aviation:books $58.24
assets:checking
2025/08/12 * Amazon
expenses:shopping $14.83
assets:checking
2025/08/13 * Owowcow
expenses:food:ice-cream $12.01
assets:checking
2025/08/15 * EQUINIX SERVICES DIRECT DEP
assets:checking $4153.54
income:equinix
2025/08/15 * Sling
expenses:subscriptions:sling $54.05
assets:checking
2025/08/15 * PECO
expenses:utilities:electric+gas $263.29
assets:checking
2025/08/15 * Jamison Lawn - Mowing
expenses:home:lawn-care $58.30
expenses:credit-card
2025/08/16 * Ez-pass
expenses:travel:ez-pass $70.00
expenses:credit-card
2025/08/18 * Old Town Pub
expenses:food:takeout $41.47
assets:checking
2025/08/18 * Bent Iron Brewing
expenses:food:takeout $21.76
assets:checking
2025/08/18 * Sundae School
expenses:food:ice-cream $40.66
assets:checking
2025/08/18 * AQUA
expenses:utilities:water $72.20
assets:checking
2025/08/18 * Capital One
expenses:credit-card $449.31
assets:checking
2025/08/19 * FRAUDULENT CHARGE
;; I am going to dispute this
expenses:fraud $129.00
assets:checking
2025/08/20 * Spotify
expenses:subscriptions:spotify $12.71
assets:checking
2025/08/20 * HBO
expenses:subscriptions:hbo $18.35
assets:checking
2025/08/20 * FIOS
expenses:utilities:internet $79.99
assets:checking
2025/08/20 * Garage Beam Replacement
expenses:home:maintenance $3500
assets:checking
2025/08/21 * Runescape Membership
expenses:subscriptions:runescape $31.79
expenses:credit-card
2025/08/22 * OOKA
expenses:food:takeout $55.05
assets:checking
2025/08/25 * Jersey Mikes
expenses:food:takeout $33.75
assets:checking
2025/08/25 * Airport parking
expenses:travel:parking $7.00
assets:checking
2025/08/25 * NYTimes
expenses:subscriptions:nytimes $23.00
assets:checking
2025/08/25 * Healthy Gamer
expenses:subscriptions:healthy-gamer $9.99
assets:checking
2025/08/25 * Interest
assets:checking $0.28
income:interest
2025/08/25 * Jamison Lawn - Mowing
expenses:home:lawn-care $58.30
expenses:credit-card
2025/08/26 * Jamison Lawn - Treatment
expenses:home:lawn-care $111.55
expenses:credit-card
2025/08/27 * Allstate
expenses:insurance $249.95
assets:checking
2025/08/29 * Jamison Lawn - Mowing
expenses:home:lawn-care $58.30
expenses:credit-card
2025/08/29 * EQUINIX SERVICES DIRECT DEP
assets:checking $4153.54
income:equinix
2025/08/29 * Owowcow
expenses:food:ice-cream $13.61
assets:checking

144
accounting/personal.ledger Normal file
View File

@@ -0,0 +1,144 @@
2025/01/01 * Opening Balance
Assets:Checking $9533.21
Assets:Savings $117837.88
Assets:Stocks:CrowdStrike 6 CRWD
Assets:Stocks:Equinix 95 EQIX
Assets:ETF:SPY 8 SPY
Assets:ETF:Vanguard 71.225 VOO
Assets:E-Trade:Cash $889.24
Equity:Opening Balance
2025/01/09 * Transfer to E-Trade
Assets:E-Trade:Cash $1500
Assets:Savings
2025/01/31 * Interest
Assets:Savings $318.31
Income:Interest
2025/01/03 * Equinix
Assets:Checking $3669.05
Income:Salary
2025/01/02 * Duolingo Subscription
Expenses:Subscriptions:Duolingo $63.59
Assets:Checking
2025/01/02 * Mortgage
Liabilities:Mortgage $2565.89
Assets:Checking
2025/01/03 * Venmo
Expenses:Food:Take Out $30.69
Assets:Checking
2025/01/06 * Trash
Expenses:Utilities:Trash $14.64
Assets:Checking
2025/01/06 * Groceries
Expenses:Food:Groceries $109.85
Assets:Checking
2025/01/07 * Bagel Barn
Expenses:Food:Take Out $19.53
Assets:Checking
2025/01/08 * Haircut
Expenses:Personal:Haircut $35.00
Assets:Checking
2025/01/08 * HSA
Assets:HSA $340.00
Assets:Checking
2025/01/09 * PECO
Expenses:Utilities:Electric $167.24
Expenses:Utilities:Gas $188.19
Assets:Checking
2025/01/13 * Farmhouse
Expenses:Food:Take Out $51.72
Assets:Checking
2025/01/13 * Barnes & Noble
Expenses:Gifts $25.37
Assets:Checking
2025/01/13 * Post Office
Expenses:Gifts $18.95
Assets:Checking
2025/01/13 * NY Times
Expenses:Subscriptions:NY Times $23.00
Assets:Checking
2025/01/13 * Gong Cha
Expenses:Food:Take Out $14.00
Assets:Checking
2025/01/15 * Recurring charge?
Expenses:Misc $5.99
Assets:Checking
2025/01/15 * Haircut
Expenses:Personal:Haircut $44.00
Assets:Checking
2025/01/15 * Subscription?
Expenses:Subscriptions:TBD $47.70
Assets:Checking
2025/01/15 * Water Bill
Expenses:Utilities:Water $67.07
Assets:Checking
2025/01/15 * Giant
Expenses:Food:Groceries $103.87
Assets:Checking
2025/01/17 * Equinix
Assets:Checking $3669.04
Income:Salary
2025/01/21 * Amazon
Expenses:Shopping:Amazon $161.96
Assets:Checking
2025/01/21 * Spotify
Expenses:Subscriptions:Spotify $12.71
Assets:Checking
2025/01/21 * HBO
Expenses:Subscriptions:HBO $18.35
Assets:Checking
2025/01/21 * Misc Spending
Expenses:Shopping:Misc $97.00
Assets:Checking
2025/01/22 * Wegmans
Expenses:Food:Groceries $133.46
Assets:Checking
2025/01/22 * Internet
Expenses:Utilities:Internet $89.99
Assets:Checking
2025/01/27 * Checking Interest
Assets:Checking $0.11
Income:Interest
2025/01/27 * Farmhouse
Expenses:Food:Take Out $25.26
Assets:Checking
2025/01/27 * Chinese Food
Expenses:Food:Take Out $56.45
Assets:Checking
2025/01/27 * Venmo
Expenses:Food:Take Out $30.00
Expenses:Food:Take Out $10.00
Assets:Checking

View File

@@ -0,0 +1,8 @@
#+TITLE: Guide for Filing Taxes
* Documents
- W2 from ADP
- 1099-INT from PNC
- Etrade Supplement offsets the RSU
- 1098 From mortgage company

35
accounting/wedding.ledger Normal file
View File

@@ -0,0 +1,35 @@
2024/11/30 * Opening Balances
Assets:WeddingFund:AdamChecking $40000
Assets:WeddingFund:JillianCard $1000
Equity:Opening Balances
2024/12/01 * (111) Photographer 40% Deposit
Expenses:Vendor:Photographer $2077.60
Assets:WeddingFund:AdamChecking
2024/12/01 * Penn Oaks Golf Club Initial Deposit
Expenses:Venue:Deposit $1000
Assets:WeddingFund:JillianCard
2024/12/01 * (112) Penn Oaks Golf Club Second Deposit
Expenses:Venue:Deposit $3000
Assets:WeddingFund:AdamChecking
2024/12/08 * EBE Deposit
Expenses:Vendor:DJ $700.00
Assets:WeddingFund:AdamChecking
2025/02/07 * Penn Oaks Golf Club Third Deposit
Expenses:Venue:Deposit $4000
Assets:WeddingFund:JillianBank
2025/05/24 ! Photographer Remainder
Expenses:Vendor:Photographer $3116.40
Assets:WeddingFund:AdamChecking
2025/05/24 * EBE Remainder
Expenses:Vendor:DJ $2075.00
Assets:WeddingFund:AdamChecking
2025/05/28 Penn Oaks Golf Club remaining

7
accounting/wedding.org Normal file
View File

@@ -0,0 +1,7 @@
* [0/8] Still need to track
- [ ] Flower costs
- [ ] Hair
- [ ] Makeup
- [ ] Hotel Accommodation
- [ ] Final Guest Count
- [ ] Philadelphia officiants

48
devenvs/papi.devenv.nix Normal file
View File

@@ -0,0 +1,48 @@
{ pkgs, ... }:
{
# https://devenv.sh/basics/
env.GREET = "devenv";
# https://devenv.sh/packages/
packages = [
pkgs.nats-server
pkgs.natscli
pkgs.libxml2
pkgs.icu
pkgs.postgresql_12
pkgs.memcached
pkgs.curl
];
# https://devenv.sh/scripts/
scripts.hello.exec = "echo hello from $GREET";
# enterShell = ''
# '';
# https://devenv.sh/languages/
languages.ruby.enable = true;
languages.ruby.package = pkgs.ruby_3_0;
# https://devenv.sh/pre-commit-hooks/
# pre-commit.hooks.shellcheck.enable = true;
# https://devenv.sh/processes/
# processes.ping.exec = "ping example.com";
processes.nats.exec = "nats-server";
services.postgres.enable = true;
services.postgres.package = pkgs.postgresql_12;
services.postgres.initialScript = ''
CREATE USER postgres SUPERUSER;
'';
services.rabbitmq.enable = true;
services.memcached.enable = true;
# See full reference at https://devenv.sh/reference/options/
}

View File

@@ -0,0 +1,328 @@
#+TITLE: Incident 2590
#+AUTHOR: Adam Mohammed
#+DATE: May 2, 2024
* Starting out
There are CPUs missing their ProcessorComponent information.
** Get a list of affected hardware
#+BEGIN_SRC ruby
affected_servers = []
Hardware::Server.find_in_batches do |hbatch|
hbatch.each do |h|
affected_servers << h unless h.components.any? { |c| c.type == "ProcessorComponent" }
end
end
#+END_SRC
#+BEGIN_EXAMPLE
1685 total affected
#+END_EXAMPLE
** Classify the affected hardware by class and plan
#+BEGIN_SRC ruby
affected_server_types = Hash.new(0)
affected_servers.each do |h|
affected_server_types[h.class] += 1
end
#+END_SRC
#+BEGIN_EXAMPLE ruby
irb(main):269:0> affected_server_types
=> {"Hardware::StorageAppliance"=>170, "Hardware::Open19Node"=>195, "Hardware::Server"=>1319, "Hardware::StorageServer"=>1}
#+END_EXAMPLE
#+BEGIN_SRC ruby
affected_plan_types = Hash.new(0)
affected_servers.each do |h|
next unless h.plan.present?
affected_plan_types[h.plan.slug.to_s] += 1
end; nil
#+END_SRC
#+BEGIN_EXAMPLE ruby
{"storage.custom"=>102,
"m3.large.x86"=>329,
"c3.small.x86"=>120,
"m3.small.x86"=>143,
"n2.xlarge.x86"=>23,
"c2.medium.x86"=>124,
"c3.medium.x86"=>396,
"netapp.storage"=>16,
"m2.xlarge.x86"=>31,
"nvidia3.a100.medium"=>1,
"t3.small.x86"=>13,
"n3.xlarge.x86"=>155,
"w3amd.75xx24c.512.8160"=>102,
"s3.xlarge.x86"=>29,
"appliance.dell.hci.vxrail.opt-m.x86"=>12,
"m3.large.opt-c2"=>3,
"nvidia3.a30.medium"=>11,
"purestorage"=>6,
"a3.large.opt-s4a5n1.x86"=>17,
"nvidia3.a30.large"=>3,
"n3.xlarge.opt-m4"=>4,
"storage.dell"=>14,
"nvidia3.a40.medium"=>9,
"w3amd.7402p.256.8160"=>1,
"a4.lg"=>5,
"a3.large.x86"=>1,
"x.large.arm"=>1,
"w3amd.75xx24c.256.4320"=>1,
"npi.testing"=>1,
"m3.large.opt-c2m4"=>1,
"a3.large.opt-s4a1"=>1,
"w3amd.75xx24c.256.8160"=>1,
"c3.large.arm64"=>2}
#+END_EXAMPLE
** What hardware is missing plan information
#+BEGIN_SRC ruby
missing_plan = []
affected_servers.each do |h|
missing_plan << h unless h.plan.present?
end; nil
#+END_SRC
#+BEGIN_EXAMPLE ruby
irb(main):289:0> missing_plan.pluck(:id, :type, :state)
=>
[["2556229f-3da0-4056-96dc-ce820af30ba3", "Hardware::Server", "enrolled"],
["4ca367f4-33c2-494f-8227-bed6c0d8bd8d", "Hardware::Server", "enrolled"],
["8504ffdf-24d7-453f-9a49-94a7cba3f9ae", "Hardware::StorageAppliance", "enrolled"],
["8b383a51-2a45-4d02-aafa-f31b159e31b6", "Hardware::Server", "enrolled"],
["a20a6442-7185-4c49-bfcf-5359fe22cd9f", "Hardware::StorageAppliance", "enrolled"],
["e2ff6fec-a70a-42e6-afb1-93f57c6a30f1", "Hardware::Server", "enrolled"],
["f9670617-0cde-4db6-94de-d7ec495881e7", "Hardware::StorageAppliance", "enrolled"]]
#+END_EXAMPLE
I think it's safe to not worry about these because customers can't deploy them yet.
** What hardware plan versions don't have the required CPU information?
#+BEGIN_SRC ruby
def valid_cpu_data?(hardware)
required_keys = ["cores", "count"]
return false unless hardware.plan_version.present? && hardware.plan_version.specs["cpus"].present?
cpu_data = hardware.plan_version.specs["cpus"][0]
required_keys.map do |k|
cpu_data.keys.include? k
end.all?
end
affected_plan_versions = Hash.new(0)
affected_servers.each do |h|
next unless h.plan_version.present?
affected_plan_versions[h.plan_version.slug] += 1 unless valid_cpu_data?(h)
end; nil
#+END_SRC
** These are the ones that are currently not being billed properly
#+BEGIN_SRC ruby
broken_billables = [
"39b7f377-af6d-437b-a99b-10d9d4fd7b53",
"d2deb4c8-446f-4679-a7f5-60edf7745e23",
"e9c50e27-9f74-477b-9210-0e277537a336",
"88a6bf4a-b63e-4c7e-8c20-5d5949ba62f9",
"5b205c53-af64-421e-b2b6-39f5923d4f3f",
"604c38d9-1f8c-4600-bb29-a0d5e1aa504a",
"d4914c80-c657-4ff2-86a1-8f41d90af0a9",
"f6f087f3-3e7c-457f-8943-a6864a8a0b97",
"88d2e8ee-6ec1-450b-9982-63d8220a1011",
"a47f38f9-c2ac-46ba-bb16-68e659b89183",
"e47a3d2e-13a0-444a-8164-ebe54fbc43b1",
"840ce4fd-a300-4a7b-96a3-140e0bf988b4",
"68e0feb1-8146-4b08-a591-15806a0f61a0",
"0e1ef1c6-2de7-40b3-91e0-44474f32fafb",
"161d4f10-4362-4028-b237-b7649f87eb09"
]
#+END_SRC
** Do these pieces of hardware have the information I need to fix the data?
#+BEGIN_SRC ruby
def my_valid_cpu_data?(hardware)
required_keys = ["cores", "count"]
return false unless hardware.plan_version.present? && hardware.plan_version.specs["cpus"].present?
cpu_data = hardware.plan_version.specs["cpus"][0]
required_keys.map do |k|
cpu_data.keys.include? k
end.all?
end
can_be_fixed = []
broken_hardware.each do |h|
next unless h.plan_version.present?
can_be_fixed << h if my_valid_cpu_data?(h)
end; nil
#+END_SRC
** Actually fix the components
#+BEGIN_SRC ruby
def create_processor_component(h_id, cpu_data, index)
cpu = ProcessorComponent.new
cpu.name = cpu_data["name"]
cpu.type = ProcessorComponent.to_s
cpu.vendor = cpu_data["manufacturer"]
cpu.model = cpu_data["model"]
cpu.serial = "CPU#{index}"
cpu.firmware_version = "N/A"
cpu.data = {
"clock" => cpu_data["speed"],
"cores" => cpu_data["cores"],
}
cpu.hardware_id = h_id
cpu
end
#+END_SRC
#+BEGIN_SRC ruby
# (Reconstructed completion of this unfinished loop; saving each component
# is an assumption based on the helper above.)
cant_fix = []
finished = []
broken_hardware.each_with_index do |h, _i|
  unless h.plan_version.present? && h.plan_version.specs["cpus"].present?
    cant_fix << h
    next
  end
  cpu_data = h.plan_version.specs["cpus"][0]
  cpu_count = cpu_data["count"].to_i
  cpu_count.times do |index|
    c = create_processor_component(h.id, cpu_data, index)
    c.save!
  end
  finished << h
end; nil
#+END_SRC
#+BEGIN_SRC ruby
"04af7a5f-6330-4095-b525-ea8a596db035"
"111fc3d1-7002-4c22-9d29-e2539c610bb1"
"15a4071c-ddd9-4fc5-b9b9-35d5831a9de3"
"19798268-39ca-454e-a7de-cab1a9cae4a5"
"1df18ad3-3189-4b87-9654-7d9b062d553d"
"20388df4-c645-445c-8563-114213c85604"
"2cafd1cc-a6ba-4caf-849d-969ac22eddca"
"2cc2596e-8045-49ea-8274-5b84e27a643c"
"2d4941a3-f0ce-454c-b9dc-6f5bf3381519"
"2e13125c-9794-4392-ab7c-0dbb10b3b4f7"
"2e24c7dc-a219-45c2-ae79-1aa0eb367d56"
"2ffa9123-6466-49a3-ac81-84a7e0dcb437"
"35d423fa-e119-4c9b-8eed-9193a4037b18"
"39888ace-88cb-49f8-8eef-f1ec14c36d2c"
"4470e1bc-0c1e-47ac-99e0-8f23cc075228"
"56c91002-4e8e-4ab8-b653-d8fb459ad186"
"59daefde-f2c2-42c2-8bc9-90d5a00e98e9"
"5bf121bf-1b11-429b-9f73-11206e9f438c"
"5f81d1f6-9c7d-41b0-bb02-a4cb5b31b1ab"
"613d4464-8c0b-44a8-8bcc-9ece50b17ce5"
"62e344ed-2fe1-4778-92e8-0dd386cf0590"
"630cf74d-d689-496c-b29f-5f094c4455d5"
"649fa2b1-675c-4433-9256-e7632092ab8a"
"66f1ef27-3310-40c3-8d06-6c889ddc1e15"
"6ac54a10-c47d-446d-8ef5-d4131bdc746c"
"6c7e5828-68fe-4114-a8e8-1e3ce9747de0"
"773240e6-7f9b-472f-847e-0a9f914e4493"
"77f9ba1e-bcd4-46c9-963a-b861fb573ab2"
"7fe941fd-4533-411e-93ad-832632910cf2"
"858a0e53-56ec-4b77-b852-8371f3ead1bd"
"85b1ab1b-664b-4d0f-855e-30ccf7f16f50"
"921d04e8-b7b8-4e13-a9f3-f55302d970c1"
"9430eb5e-fbe2-48b0-b180-d94347a5f296"
"a172606f-4d90-41b8-a1f1-0cd1b20aaa7f"
"a8f5d150-0f5b-4a92-9583-0e70473a9b8b"
"a96d685d-b16f-4852-84dd-dd3304b37471"
"aa1f836f-5808-452a-a5bc-884acd3bcd90"
"abc678fd-d92b-4fc9-ad46-bc6316c170c6"
"afce0857-1016-4638-91bf-f67ee9ade423"
"b37bae11-645f-45cf-b55e-20604b5f3030"
"d263768d-c460-4bdd-81fa-c04fe80122cc"
"d4849d7e-8b68-4f14-97f6-0682c20d4706"
"d634a3b5-98ef-4eca-8fe3-3bc4903170c9"
"dff6b6b3-d46e-47c8-8c85-e85f2566893b"
"e248f2f0-b1dc-4e6e-b025-687ea375fe2d"
"e9e17d57-f8dd-4f8a-b31b-6e33c8e25078"
"fa97834e-d71e-4d8f-8fc0-2e8988a05a28"
"fb6c21a5-8640-4e1e-af18-2790f3a79873"
#+END_SRC
1. LicenseActivationID
2. Licensable (an Instance Model)
3. PlanVersion
4. CPU count and CPU cores
5. Update License.data["cores"] = cpu_count * cpu_cores
#+BEGIN_SRC ruby
# this doesn't save the things
def fix_core_count_prime(license_activation)
instance = license_activation.licensable
return "missing instance" unless instance.present?
plan_version = instance.plan_version
return "missing plan_version" unless plan_version.present?
cpu_data = plan_version.specs["cpus"][0]
return "missing cpu_data" unless cpu_data.present? && cpu_data["cores"] && cpu_data["count"]
cpu_count = plan_version.specs["cpus"][0]["count"].to_i
cpu_cores = plan_version.specs["cpus"][0]["cores"].to_i
license = license_activation.license
license.data["cores"] = cpu_count * cpu_cores
license
end
res = broken_license_activations[2..].map do |la_id|
la = LicenseActivation.with_deleted.find(la_id)
return "couldn't find la #{la_id}" unless la.present?
fix_core_count_prime(la)
end
# when I wanted to get the successful ones
res.filter { |item| item.is_a? License }
# when I wanted to see what broke
res.filter { |item| !item.is_a? License }
#+END_SRC
** Are there any windows licenses remaining with 0 cores, that aren't erroring yet?
#+BEGIN_SRC ruby
activations = LicenseActivation.eager_load(:license).eager_load(:licensee_product).where("licensee_products.slug LIKE '%windows%'").all
missing_cores = activations.map do |la|
if la.license.data["cores"] == 0
la
else
nil
end
end.compact
fixed_licenses = missing_cores.map do |la|
fix_core_count_prime(la)
end
#+END_SRC
#+BEGIN_SRC ruby
irb(main):066:0> LicenseActivation.eager_load(:license).eager_load(:licensee_product).where("licensee_products.slug LIKE '%windows%'").where("(licenses.data->>'cores')::integer = 0").count
=> 0
irb(main):067:0> LicenseActivation.eager_load(:license).eager_load(:licensee_product).where("licensee_products.slug LIKE '%windows%'").where("(licenses.data->>'cores')::integer > 0").count
=> 538
#+END_SRC

View File

@@ -0,0 +1,47 @@
#+TITLE: K8s concepts review
#+AUTHOR: Adam Mohammed
#+DATE: September 18, 2023
At one of the meetings I brought up how similar nano-metal felt to a
collection of K8s specifications that make standing up and managing
K8s clusters easier. In this document I'll cover the following topics
at a high level: Cluster API, CNI, CCM, and CSI.
First is the Cluster API; this came about as a means for creating and
managing Kubernetes clusters using Kubernetes itself. The cluster API
allows an operator to use a so-called "management cluster" to create
other K8s clusters known as "Workload clusters." The cluster API is
NOT part of the core K8s resources, but is implemented as a set of
custom resource definitions and controllers to actually carry out the
desired actions.
A cluster operator can use the cluster-api to create workload clusters
by relying on 3 components: bootstrap provider, infrastructure
provider, and the control plane provider. Nanometal aims at making
provisioning of bare metal machines extensible and scalable by
enabling facilities to carry out the desired operations requested by
the EMAPI. We can think of the EMAPI as the "management cluster" in
this world.
What Metal has today maps well to the infrastructure provider, since
all the cluster-api has to do is ask for machines with a certain
configuration and the provider is responsible for making that
happen. I think for this project a bulk of this work is figuring out
how we make the infrastructure provider out of our existing
components, but let's put that aside for right now and consider the
rest of the components.
The bootstrap and the control plane providers are concepts that also
seem important to our goal. We want it to be simple for us to enter a
new facility and set up the components we need to start provisioning
hardware. The bootstrap provider, in the cluster-api concepts, turns a
server provisioned with a base OS into an operating K8s node. For us,
we probably would also want some process which turns any facility or
existing datacenter into an Equinix Metal managed facility.
Once we know about the facility that we need to manage, the concept of
the control plane provider maps well with the diagrams from Nanometal
so far. We'd want some component that installs the required agent and
supporting components in the facility so we can start to be able to
provide Metal services there.

View File

@@ -18,11 +18,11 @@ The API deployment consists of:
** Release Candidate Deployment Strategy
This is a form of a canary deployment strategy. This strategy involves
diverting just a small amount of traffic to the new version, while
looking for an increased error rate. After some time, we assess how
the candidate has been performing. If things look bad, then we scale
back and address the issues. Otherwise we ramp up the amount of
traffic that the pods see.
Doing things this way allows us to build confidence in the release but
it does not come without drawbacks. The most important thing to be
@@ -47,30 +47,18 @@ the two versions are compatible, and can run side-by-side.
* Lessons from Previous Rails Upgrades
We have telemetry set up to monitor the system as a whole, so
identifying whether or not something looks like an issue related to
the upgrade or is unrelated has been left to SMEs' intuition.
In the rails 5.2->6.0 upgrade we hit a couple issues:
- Rails 6 jobs were not able to be served with 5 workers
- We addressed this before rolling forwards
- Prometheus-client upgrade meant that all the cron jobs succeeded but
failed to report their status.
In the rails 6.1 upgrade we observed a new issue with respect to users
seeing 404s through the portal, after hitting the =/organizations=
endpoint.
- I decided that the scope of the bug was small enough that we were
okay to roll forward.
- Error rates looked largely the same because the symptom that we
observed was an increased number of 403s on the Projects Controller
* Defining key performance indicators
Typically, what I would do (and what I assume Lucas does) is just keep an eye on Rollbar. Rollbar would capture things that are at least fundamentally broken that would cause exceptions or errors in Rails. Additionally, I would keep a broad view on errors by span kind in honeycomb to see if we were seeing a spike associated with the release candidate.
- What we were looking at in the previous releases
- Error rates by span kind per version
This helps us know if the error rate for requests is higher in one version or the other, or if we're failing specifically in processing background jobs.
- No surprises in Rollbar
Instead, ideally we'd be tracking information the system reports that is stable.

View File

@@ -0,0 +1,23 @@
#+TITLE: Scalable API
#+AUTHOR: Adam Mohammed
* Overview
In this document we take a look at the concept of breaking the
monolith from the start. By that I mean: what do we hope to achieve
by breaking the monolith? From there we can identify the
problems we're trying to solve.
Part of the problem I have with the "breaking the monolith" phrase is that the
vision is too lofty. That phrase isn't a vision; it's snake oil. The
promised land we hope to get to is a place where teams are able to
focus on delivering business value and new features to customers that
are meaningful, leverage our existing differentiators, and enable new
differentiators.
What do we believe is preventing us from delivering business value quickly
currently? What we identify there is a hypothesis based on some
level of intuition, so it's a great start for an attempt to optimize
the process. It's even better if we can quantify how much effort is
spent doing these speed-inhibiting activities, so we know we're
optimizing our bottlenecks.

View File

@@ -0,0 +1,166 @@
#+TITLE: Session Scheduler
* Overview
For some API requests, the time it would take to serve the request is
too long for a typical HTTP call. We use ActiveJob from Rails to
handle these types of background jobs. Typically, instead of servicing
the whole request before responding back to the client, we'll just
create a new job and then immediately return.
Sometimes we have jobs that need to be processed in a specific order,
and this is where the session scheduler comes in. It manages a number
of queues for workloads, and assigns a job to that queue dynamically.
This document talks about what kind of problems the scheduler is meant
for, how it is implemented and how you can use it.
* Ordering problems
Often in those background jobs, there are some ordering constraints
that we have between the jobs. In some networking APIs for example,
things must happen in some order to achieve the desired state.
The simplest example of this is assigning and unassigning a VLAN to a
port. You can quickly make these calls to the API in succession, but
it may take some time for the actual state of the switch to be
updated. If these jobs are processed in parallel, the order in which
they finish determines the final state of the port.
If the unassign finishes first, then the final state the user will see
is that the port is assigned to the VLAN. Otherwise, it'll end up in
the state without a VLAN assigned.
The best we can do here is make the assumption that we get the
requests in the order that the customer wanted operations to occur
in. So, if the assign came in first, we must finish that job before
processing the unassign.
Our api workers that serve the background jobs currently fetch and
process jobs as fast as they can with no respect to ordering. When
ordering is not important, this method works to process jobs quickly.
With our networking example though, it leads to behavior that's hard
to predict on the customer's end.
* Constraints
We have a few constraints for creating a solution to the ordering
problem. Using the VLANS as an example.
- Order of jobs must be respected within a project, but total ordering
is not important (e.g. Project A's tasks don't need to be ordered
with respect to Project B's tasks)
- Dynamically spinning up consumers and queues isn't the most fun thing
in Ruby, but having access to the monolith data is required at this
point in time.
- We need a way to map an arbitrary number of projects down to a fixed set of
consumers.
- Although total ordering doesn't matter, we do want to be somewhat
fair
Let's clarify some terms:
- Total Ordering - All events occur in a specific order (A1 -> B1 ->
A2 -> C1 -> B2 -> C2 -> B3)
- Partial ordering - Some events must occur before others, but the
combinations are free (e.g. A1 must occur before A2 which must occur
before A3, but [A1,A2,A3] has no
relation to B1).
- Correctness - Job ordering constraints are honored.
- Fairness - If there are jobs A1, A2....An and jobs B1, B2....Bn both
are able to get serviced in some reasonable amount of time.
* Session scheduler
** Queueing and Processing Jobs In Order
For some requests in the Metal API, we aren't able to fully service
the request in the span of an HTTP request/response. Some things
might take several seconds to minutes to complete. We rely on Rails
Active Job to help us achieve these things as background
jobs. ActiveJob lets us specify a queue name, which until now, has
been a static name such as "network".
The API runs a number of workers that are listening on these queues
with multiple threads, so we can pick up and service the jobs quickly.
This breaks down when we require some jobs to be processed serially or
in a specific order. This is where the =Worker::SessionScheduler=
comes in. This scheduler dynamically assigns the queue name for a job
so that it is accomplished in-order with other related jobs.
A typical Rails job looks something like this:
#+begin_src ruby
class MyJob < ApplicationJob #1
queue_as :network #2
def perform #3
# do stuff
end
end
#+end_src
1. We can tell the name of the job is =MyJob=
2. Show the queue that the job will wait in before getting picked up
3. =perform= is the work that the consumer picking up the job will do
Typically, we'll queue a job to be performed later within the span of
an HTTP request by doing something like =MyJob.perform_later=. This
puts the job on the =network= queue, and the next available worker
will pull the job off of the queue and then process it.
In the case where we need jobs to be processed in a certain order it
might look like this:
#+begin_src ruby
class MyJob < ApplicationJob
queue_as do
project = self.arguments.first #2
Worker::SessionScheduler.call(session_key: project.id)
end
def perform(project)
# do stuff
end
end
#+end_src
Now, instead of the queue name at =2= being static, it's dynamically
assigned by the scheduler.
The scheduler will use the "session key" to see if there are any other
jobs queued with the same key; if there are, the job gets sent to the
same queue.
If there aren't, you'll get sent to the queue with the least number of
jobs waiting to be processed, and any subsequent requests with the
same "session key" will follow.
Just putting jobs in the same queue isn't enough though, because if we
process the jobs from a queue in parallel, then we end up in a
situation where we can still have jobs completing out of order. We
have queues designated to serve this purpose of processing things in
order. We're currently leveraging a feature of RabbitMQ queues that
lets us guarantee that only one consumer is ever getting the jobs to
process. We rely on the configuration of that consumer to only use a
single thread as well to make sure we're not doing things out of
order.
This can be used to do any set of jobs which need to be ordered,
though currently we're just using it for Port VLAN management. If you
do decide to use this, you need to make sure that all the jobs which
are related share some attribute so you can use that as your "session
key" when calling into the scheduling service.
The scheduler takes care of the details of managing the queues, so
once all the jobs for a session are completed, that session will get
removed and the next time the same key comes in it'll get reallocated
to the best worker. This allows us to rebalance the queues over time
so we prevent customers from having longer wait times despite us doing
things serially.

View File

@@ -0,0 +1,8 @@
4/30 18:29 - Incident 2590 created
4/30 18:34 - List posted of affected servers, zero core count causing issues billing for license activations
4/30 18:35 - Nautilus goalie asks what needs to be changed
4/30 18:37 - Ask is to have Nautilus engineer make prod data changes to allow a billing run to succeed
4/30 18:37 - Nautilus goalie tries to figure out if theres time to test before making the change
4/30 18:53 - Urgency is due to billing run set to start at 5/1 1:30 UTC
4/30 19:33 - Determined scope of issue to be Instances with OSes that require core counts for licenses

View File

@@ -0,0 +1,35 @@
#+TITLE:
2024-08-11 10:58 UTC - API 500s increased to 1500/min
2024-08-11 11:14 UTC - Nautilus Goalie paged for 500 errors
2024-08-11 11:23 UTC - Opened Incident 2030
2024-08-11 11:25 UTC - Rollbar errors indicate issues with memcached
2024-08-11 11:25 UTC - Honeycomb shows that all traffic is being served 500s
2024-08-11 11:26 UTC - Increased memcached memory limit in an attempt to resolve Out of Memory errors
2024-08-11 11:35 UTC - Called for status page
2024-08-11 11:53 UTC - Started to see successful responses for production traffic
2024-08-11 11:58 UTC - Re-occurrence of 500s
2024-08-11 12:00 UTC - Update from AppSec that Kona alerts for an attack on the API
2024-08-11 12:01 UTC - Cloudflare graphs posted that showed sharp drop in traffic at around 7:45 (not sure about granularity)
2024-08-11 12:09 UTC - Observed log line in splunk indicating timeouts when talking to memcached
2024-08-11 12:12 UTC - Noticed K8s probes failing and causing application restarts
2024-08-11 12:23 UTC - Posted graph of application cycling between healthy and not every 5 minutes
2024-08-11 12:25 UTC - Determined the liveness probes were failing and causing the restarts after 5 minutes
2024-08-11 12:26 UTC - Increased timeout to accommodate from 3s to 10s
2024-08-11 12:33 UTC - API served traffic for CF to bring origins back online
2024-08-11 12:36 UTC - Metal API is up and serving requests but most requests are timing out, P95 is 100x what it is normally
2024-08-11 13:33 UTC - Front end pods being removed from serving traffic by readiness probes failing
2024-08-11 13:33 UTC - Suspected issue with priming the cache, increased frontend pods to help alleviate request pressure
2024-08-11 13:33 UTC - Looking to determine root cause of network timeouts
2024-08-11 13:44 UTC - Posted memcache stats showing extremely high hit rate despite being nearly empty
2024-08-11 14:09 UTC - Determined logging on MemcacheD caused CPU throttling of the pod
2024-08-11 14:18 UTC - Reduced log level on memcached pods and saw CFS throttling resolve
2024-08-11 14:31 UTC - API back and serving requests for a short period
2024-08-11 15:01 UTC - Updated memcached item_size_max to address Value too large errors from Flipper
2024-08-11 15:50 UTC - Established Confidence in root cause
2024-08-11 16:04 UTC - API PR to disable caching feature flags in memcached
2024-08-11 17:20 UTC - Deploying API PR to remove caching feature flags
2024-08-11 18:08 UTC - Moved incident status to Monitoring
2024-08-11 18:12 UTC - Metal API up and responding with slightly higher P95
2024-08-11 22:29 UTC - Changed incident status to resolved

View File

@@ -0,0 +1,152 @@
* Bootstrapping trust in a capability model
There are two basic ways to start the chain of trust with a capability
model, either the resource server is started with a set of root
capabilities that governs all the resources, or ambient authority is
used to provide the initial trust.
Let's take the IP example further: some IPAM service is supposed to
govern the RFC1918 space for Equinix. It provides an API for
downstream services to request blocks of arbitrary size, so they can
further allocate smaller blocks from those blocks.
I think the easiest way is just to use ACLs for the initial set of
capabilities, and once the service is live, the majority of requests
would be using wrapped resources. Let's say this IPAM service allows
creation of "root" ranges through a create range API.
An operator could create the range for 10.0.0.0/8. And then create a
wrapped resource to delegate to downstream services.
If MCN, Metal and Fabric are all interested in sharing this IP space,
we could have the service request an IP range of a specific size. Then
the operator could create wrapped resources for larger ranges for each
of the business units, and then hand those to the operators for the
MCN/Metal and Fabric services.
Once the dependent service gets their wrapped resource, they can
further divide the resources if they have multiple services that want
to allocate from distinct pools within that space, or they can all
share the capability as-is.
The dependent service could then make direct calls to the IPAM service
to make "assignments" in the IPAM service to mark that that IP is
currently in use within the larger range.
Eventually, we want to get away from this pattern of operator X doing
operations for operator Y, because it means that
Let's assume we made an IPAM service that has the following endpoints:
- CREATE IP Range
Adds an entry to allow the IPAM service to govern the range
Returns a resource ID
- LIST IP Ranges
Lists all the ranges governed by the IPAM service
- GET IP RANGE
Shows details about the IP range, such as how much of the range is
allocated.
Can be accessed by either by ACL, or capability
- DELETE IP Range
Remove an IP range from being governed by the IPAM service
- CREATE IP Range Request
Request a capability which lets a service allocate from this IP Range
- GET/LIST IP Range Request
Show the status of a request
- PUT IP Range Request
Allows approving/denying the request
- DELETE IP Range Request
Removing an IP range request
- CREATE IP Assignment
Only accepts a wrapped resource, marks IP Address or subnet as allocated.
Now we consider how we get to be able to start using
capabilities. Initially, an operator needs to start the service by
creating some IP ranges that the IPAM service is responsible for. This
endpoint can use ACLs to check that the operator has the authorization
to create ranges, and then the service can allow requests.
Next, some service, like the Metal Provisioner needs to assign IPs to
instances so they can talk to each other over the private
network. Initially the provisioner doesn't have access to any IP
ranges, so it sends a request for a /16. That /16 request is then
approved by an IPAM operator, and the provisioner receives a
capability that allows manipulating assignments on that range.
The IPAM operator portion could be removed
----
IPAM Worked Example
Let's assume we have an IPAM system which governs 10.0.0.0/8, and
other IP blocks. We have a service, such as LBaaS which needs to
assign Private IPs to customer Load balancer instances. The LBaaS
service needs to assign unique IPs to the load balancer instances so
that customer can route traffic to their metal instances.
The LB service needs to reach out to the IPAM service to pull an IP,
and to do that, it must request it within a block represented by a
wrapped resource. So how does the service initially obtain this
wrapped resource?
On first startup, the LBaaS service knows it doesn't have the
capability to assign IPs because it doesn't have a wrapped resource
for the range. It reaches out authenticated as itself to the IPAM
service, and requests a =/16=. That request is authorized just by the
fact that the LB service has the correct audience to talk to the IPAM
service.
The request is recorded, and some approval process is done by the IPAM
operators, or is determined by business logic. Once approved, the
wrapped resource for the requested range is issued to the LBaaS
service, which it stores. Now, whenever an IP is needed, it makes an
assignment under that wrapped resource.
Internally, the IPAM service needs to record that a block is currently
active, and that the capability sent to the LB service references
it. As an example, let's say the 10.0.0.0/8 is represented by the root
resource identifier `ntwkblk-a1b2c3`. When the LB service requests a
=/16=, a new IP reservation resource is created `ntwkipr-xyzxyz`, and
once approved, a capability is created, by calling,
WrapResource(ntwkipr-xyzxyz, [create_assignment, read_assignment, delete_assignment],
{}), which produces a wrapped resource with ID
`ntwkipr-u8e82i.qeoalf` and the IPAM service distributes this back to
the LB service.
When the LB service wishes to record an assignment to that block, it
can make a request to the IPAM services assignment endpoint,
(e.g. POST /ip-reservations/ntwkipr-u8e82i.qeoalf/assignments). From
there, the IPAM service calls, UnwrapResource(ntwkipr-u8e82i.qeoalf,
[create_assignment], {}), which succeeds because the wrapped resource
is valid, the verifier matches, and the operation is allowed for that
ID. And the assignment is created.
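As a rough sketch of that wrap/unwrap round trip (the function names follow the prose above, but the token format, storage, and checks here are assumptions, not the real service):
#+begin_src ruby
require "securerandom"

# Hypothetical in-memory capability store; a real service would persist
# and sign these.
CAPABILITIES = {}

def wrap_resource(resource_id, operations, caveats = {})
  verifier = SecureRandom.hex(4)
  wrapped_id = "#{resource_id}.#{verifier}"
  CAPABILITIES[wrapped_id] = { resource: resource_id, ops: operations, caveats: caveats }
  wrapped_id
end

def unwrap_resource(wrapped_id, requested_ops, _caveats = {})
  cap = CAPABILITIES[wrapped_id]
  return nil unless cap                                # unknown or revoked capability
  return nil unless (requested_ops - cap[:ops]).empty? # operation not granted
  cap[:resource]                                       # caller may act on this resource
end

# The LBaaS flow from the example above:
wrapped = wrap_resource("ntwkipr-xyzxyz",
                        %w[create_assignment read_assignment delete_assignment])
unwrap_resource(wrapped, %w[create_assignment]) # => "ntwkipr-xyzxyz"; the assignment proceeds
#+end_src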
This example describes a manual approval process and doesn't
necessarily describe how the async process is implemented for yielding
the capability back to the requesting service. The manual approval
process could easily be replaced by setting limits per identity, and
requiring manual approval for higher limits, e.g. Any product can
request up to a /24, but if you want anything larger, you'll need
manual approval by the governing team. In that case, the system
becomes more dynamic and teams can self-serve their requests. The
distribution of the capability must happen over a secure channel as
well, such as a NATS topic that only the requesting service has access
to, or by direct callback API.
Further delegation is possible as well, where the LB service could ask
the IPAM service to wrap `ntwkipr-u8e82i.qeoalf` another time, but
this time only to perform `read_assignment` and then the LB team can
create operator tools to find details about the assignment from the
IPAM service without having the ability to do damage.

View File

@@ -0,0 +1,7 @@
#+TITLE: EaaS vs EIS Identity
* "We have little new concepts"
"We have few new concepts" or "We have no new concepts"
fixergrid.net/jillian-and-adam-wedding

View File

@@ -0,0 +1,34 @@
#+TITLE: How do Interconnections work for dummiez
#+Author: Adam Mohammed
* User Flows
The user starts by making an API call to ~POST
/projects/:id/connections~. When they make this request they are able
either to use a full dedicated port, on which they get the full
bandwidth, or to use a shared port. The dedicated port promises the
full bandwidth, but is more costly.
A user is also able to select whether the connection at Metal is
the A side or the Z side. If it's the A-side, then Metal does the
billing, if it's the Z-side, Fabric takes care of the billing.
A-side/Z-Side is a telecom terminology, where the A-side is the
requester and the Z side is the destination. So in the case of
connecting to a CSP, we're concerned with a-side from metal because
that means we're making use of Fabric as a service provider to give us
connection to the CSP within the metro.
If we were making Z-side connections, we'd be granting someone else
in the DC access to our networks.
* Under the Hood
When the request comes in we create (roughly sketched below):
- An interconnection object to represent the request
- Virtual Ports
- Virtual circuits associated with each port
- A service token for each
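A hypothetical sketch of those objects; the names and fields are illustrative, not the actual Metal API schema.
#+begin_src ruby
require "securerandom"

# Illustrative only: the rough shape of what gets created per request.
Interconnection = Struct.new(:name, :type, :ports)
VirtualPort     = Struct.new(:speed, :circuit)
VirtualCircuit  = Struct.new(:status, :service_token)

def create_interconnection(params)
  ports = Array.new(params.fetch(:port_count, 1)) do
    # Each virtual circuit gets its own service token, redeemed on the Fabric side.
    circuit = VirtualCircuit.new("pending", SecureRandom.uuid)
    VirtualPort.new(params[:speed], circuit)
  end
  Interconnection.new(params[:name], params[:type], ports) # type: "dedicated" or "shared"
end
#+end_src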

View File

@@ -0,0 +1,43 @@
#+TITLE: Metal Event Entrypoint
#+AUTHOR: Adam Mohammed
* Problem
We would like other parts of the company to be able to notify Metal about
changes to infrastructure that cross out of Metal's business
domain. The concrete example here is for Fabric to tell metal about
the state of interconnections.
* Solution
Metal's API team would like to expose a message bus to receive events
from the rest of the organization.
Metal's API currently sits on top of a RabbitMQ cluster, and we'd like
to leverage that infrastructure. There are a couple of problems we
need to solve before we can expose the RabbitMQ cluster.
1. RabbitMQ is currently only available within the cluster.
2. Fabric (and other interested parties) exist outside of Metal
firewalls that allow traffic into the K8s clusters.
3. We need to limit blast radius if something were to happen on this shared
infrastructure, we don't want the main operations on Rabbit that Metal
relies on to be impacted.
For 1, the answer is simple: expose a path under
`api.core-a.ny5.metalkube.net` that points to the rabbit service.
For 2, we leverage the fact that CF and Akamai are whitelisted to the
metal K8s clusters for the domains `api.packet.net` and
`api.equinix.com/metal/v1`. This covers getting the cluster exposed to
the internet.
For 3, we can make use of RabbitMQ [[https://www.rabbitmq.com/vhosts.html][Virtual Hosts]] to isolate the
/foreign/ traffic to that host. This lets us set up separate
authentication and authorization policies (such as using Identity-API
via [[https://www.rabbitmq.com/oauth2.html][OAuth]] plugin) which are absolutely
necessary since now the core infrastructure is on the internet. We are
also able to limit resource usage by Vhost to prevent attackers from
affecting the core API workload.
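For illustration, a consumer scoped to such an isolated vhost might look like this sketch (Bunny client; the vhost, exchange, queue, and credential names are placeholders, not the agreed design):
#+begin_src ruby
require "bunny"

# Connect to the dedicated vhost so foreign traffic never shares the vhost
# used by Metal's core workload. All names here are placeholders.
conn = Bunny.new(
  host:     "rabbitmq.example.internal",
  vhost:    "external-events",
  username: ENV.fetch("RABBIT_USER"),
  password: ENV.fetch("RABBIT_PASS")
)
conn.start

channel  = conn.create_channel
exchange = channel.topic("interconnections", durable: true)
queue    = channel.queue("metal.interconnection-events", durable: true)
queue.bind(exchange, routing_key: "fabric.connection.#")

# Hand each event off to the API's own processing (e.g. a background job).
queue.subscribe(block: true) do |_delivery_info, _properties, payload|
  puts "received event: #{payload}"
end
#+end_src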

View File

@@ -0,0 +1,26 @@
Ok, so I met with Sangeetha and Bob from MCNS and I think I have an
idea of what needs to happen with our integrated network for us to
build things like MCNS and VMaaS.
First, you just need two things to be able to integrate at the
boundaries of Metal and Fabric: you need a VNI and you need a USE
port. Metal already has a service which allocates VNIs, so I was
wondering why Jarrod might not have told MCNS about it. Since VNIs and
USE ports are both shared resources that we want a single bookkeeper
over, there's only one logical point to do that today, and that's the
Metal API.
In a perfect world though, the Metal API doesn't orchestrate our
internal network state so specifically, at least I think. It'd be nice
if we could rip out the USE port management from the API and push that
down a layer away from the customer facing API. The end result is we
have internal services (Metal API, MCNS, VMaaS) all building on our
integrated network, but we still just have a single source of truth
for allocating the shared resources.
Sangeetha got a slice of VNIs and (eventually will have) USE ports for
them to build the initial MCNS product, but eventually we'll want to
bring those VNIs and ports under control of a single service, so we
don't have multiple bookkeeping spots for the same resources.
Jarrod's initial plan was to just build that into the Metal API, but
if we can,

View File

@@ -0,0 +1,202 @@
#+TITLE: Linux Networking For Fun and Profit
#+DATE: March 30, 2024
* Setting up "hosts" with network namespaces
Network namespaces give us a neat way to simulate networked
machines. This walks through using Linux network namespaces to
configure a set of "hosts" in the following configuration.
#+begin_src
            Host 3
           /      \
      veth1b      veth2b
        /            \
    veth1a          veth2a
    Host 1          Host 2
#+end_src
- Host 1 - 192.168.65.1
- Host 2 - 192.168.65.2
- Host 3 - 192.168.65.3
In this configuration, even though these are all on the same subnet,
Host 1 only has a connection to Host 3, so it can't directly reach Host 2.
** Basic network namespace set up
All these steps are performed on Fedora 39 (Linux 6.7.5), but this
should be possible on any modern Linux distro.
First we'll create all the network namespaces
#+begin_src bash
sudo ip netns add host1
sudo ip netns add host2
sudo ip netns add host3
#+end_src
Then we'll create all the (virtual) interfaces we need. These paired
virtual ethernets act as direct connections between the host machines.
#+begin_src bash
sudo ip link add veth1a type veth peer name veth1b
sudo ip link add veth2a type veth peer name veth2b
#+end_src
So far we've only created the interfaces; we haven't assigned them to
our network namespaces, so let's do that now:
#+begin_src bash
sudo ip link set veth1a netns host1
sudo ip link set veth1b netns host3
sudo ip link set veth2a netns host2
sudo ip link set veth2b netns host3
#+end_src
** Point to point connectivity
At this point we've got the hosts mostly configured: each host has the
correct interfaces, but we have to bring them up and assign IPs. Let's
start by assigning IPs to just Host 1 and Host 2 to prove we can't
communicate just yet.
#+begin_src bash
sudo ip netns exec host1 ip addr add 192.168.65.1/24 dev veth1a
sudo ip netns exec host1 ip link set veth1a up
sudo ip netns exec host2 ip addr add 192.168.65.2/24 dev veth2a
sudo ip netns exec host2 ip link set veth2a up
sudo ip netns exec host1 ping -c1 192.168.65.2
# this should fail with 100% packet loss
#+end_src
We know there's a path from Host 1 to Host 2 through Host 3, but
before we set that up, let's just make sure we can communicate
point-to-point from Host 1 to Host 3.
We'll do this by adding an IP to veth1b and bringing it up, and then
pinging that IP from Host 1.
#+begin_src bash
sudo ip netns exec host3 ip addr add 192.168.65.3/24 dev veth1b
sudo ip netns exec host3 ip link set veth1b up
sudo ip netns exec host3 ip link set veth2b up
sudo ip netns exec host1 ping -c1 192.168.65.3
#+end_src
Host 1 to Host 3 succeeds because our veth pair is connected directly.
** Bridging across virtual ethernet interfaces
So that's easy: we can communicate point-to-point, but we still can't
get from Host 1 to Host 2. We can check the ARP
table to see why.
#+begin_src bash
sudo ip netns exec host1 arp
Address HWtype HWaddress Flags Mask Iface
192.168.65.3 ether 1a:60:c6:d9:2b:a0 C veth1a
192.168.65.2 (incomplete) veth1a
#+end_src
ARP isn't able to figure out which MAC address belongs to
192.168.65.2. We have a veth pair on Host 3 connected to Host 1
and another veth pair connected to Host 2, but we can't get from Host
1 to Host 2.
We can solve this at layer 2 by creating a bridge interface that just
sends packets along from one interface to the other.
First let's remove the IP we put on the veth1b.
#+begin_src bash
sudo ip netns exec host3 ip addr del 192.168.65.3/24 dev veth1b
#+end_src
Now let's create that bridge interface, so we can allow the networking
stack to pass packets from veth1b to veth2b.
#+begin_src bash
sudo ip netns exec host3 ip link add br0 type bridge
sudo ip netns exec host3 ip link set veth1b master br0
sudo ip netns exec host3 ip link set veth2b master br0
#+end_src
And now, instead of assigning the IPs to the underlying interfaces,
we'll just assign an IP to the bridge interface and bring it up
#+begin_src bash
sudo ip netns exec host3 ip addr add 192.168.65.3/24 dev br0
sudo ip netns exec host3 ip link set up br0
#+end_src
Let's test our configuration from Host 3; we should now be able to
reach both Host 1 and Host 2 by leveraging our underlying veth
interfaces.
#+begin_src bash
sudo ip netns exec host3 ping -c1 192.168.65.1
sudo ip netns exec host3 ping -c1 192.168.65.2
#+end_src
And now, let's try from Host 1 to Host 2, and back.
#+begin_src bash
sudo ip netns exec host1 ping -c1 192.168.65.2
sudo ip netns exec host2 ping -c1 192.168.65.1
#+end_src
Finally, our pings succeed, and if we look at the ARP
table we can confirm that the hardware address for 192.168.65.2 matches the veth
device on Host 2.
#+begin_src bash
sudo ip netns exec host1 arp
Address HWtype HWaddress Flags Mask Iface
192.168.65.2 ether 46:ca:27:82:5a:c3 C veth1a
192.168.65.3 ether b6:66:d9:d0:4d:39 C veth1a
#+end_src
** Complete the set up with loopback interfaces
There's still something funny though: if a host tries to reach itself,
it can't yet, because we never brought up the loopback interface.
#+begin_src bash
sudo ip netns exec host2 ip a show lo
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
#+end_src
We'll bring it up for all 3 hosts and we're in business.
#+begin_src sh
for i in 1 2 3
do
echo "============:: Host ${i}"
sudo ip netns exec host${i} ip link set lo up
for j in 1 2 3
do
sudo ip netns exec host${i} ping -c1 192.168.65.${j}
done
echo "======================"
done
#+end_src
** Cleaning up
All the resources should be in the network namespaces, so we should be
able to easily clean up by removing the namespaces.
#+begin_src sh
for i in 1 2 3
do
sudo ip netns delete host${i}
done
#+end_src
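As a final optional check, listing the namespaces again should come back empty:
#+begin_src sh
# After cleanup this should print nothing
sudo ip netns list
#+end_src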

equinix/design/nimf-m2.org Normal file
@@ -0,0 +1,122 @@
#+TITLE: NIMF Milestone 2
#+SUBTITLE: Authentication and Authorization
#+AUTHOR: Adam Mohammed
* Overview
This document discusses the authentication and authorization between Metal
and Fabric, focused on the customer's experience. We want to deliver a
seamless user experience that allows users to set up connections
directly from Metal to any of the Cloud Service Providers (CSPs) they
leverage.
* Authentication
** Metal
There are a number of ways to authenticate to Metal, but ultimately it
comes down to how the customer wishes to access their
resources. The main methods are signing in to the web portal as a user
and calling the API directly.
Portal access uses an OAuth flow that lets the browser
obtain a JWT, which can then be used to authenticate against the Metal
APIs. It's important to understand that the Portal doesn't make calls
as itself on behalf of the user; the user makes the calls themselves
by way of their browser.
Direct API access is done either through static API keys issued to a
user or to a project. Integrations through tooling or language-specific
client libraries are also provided.
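For illustration, a direct API call with a static key could look like the sketch below; the =X-Auth-Token= header and base URL follow the public Metal API docs, while the key variable and =jq= filter are placeholders:
#+begin_src bash
# Illustrative only: list projects visible to a user or project API key
curl -s -H "X-Auth-Token: ${METAL_API_KEY}" \
  https://api.equinix.com/metal/v1/projects | jq '.projects[].name'
#+end_src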
** Fabric
* Authorization
** Metal
** Fabric
Option 4 - Asynchronous Events
Highlights:
- Fabric no longer makes direct calls to Metal, it only announces that the connection is ready
- Messages are authenticated with JWT
- Metal consumes the events and modifies the state of resources as a controller
Option 5 - Callback/Webhook
Highlights:
- Similar to Option 4, though the infrastructure is provided by Metal
- Fabric instead emits a similarly shaped event that says a connection's state has changed
- It's Metal's responsibility to consume that and respond accordingly
Changes Required:
- Fabric sends updates to this webhook URL
- Metal consumes messages on that URL and handles them accordingly
- Metal provides a way to see current and desired state
Advantages:
Disadvantages:
* Documents
** Equinix Interconnections
Metal provided interconnections early on to give customers access to the
network capabilities provided by Fabric and Network Edge.
There are currently two basic types of interconnections: a dedicated
interconnection and a shared one. The dedicated version, as it sounds,
uses dedicated port infrastructure that the customer owns. This is
often cost prohibitive, so interconnections over Equinix-owned shared
infrastructure fill that space.
The dedicated interconnection types have relatively simple logic in
the API relative to shared interconnections. A dedicated
interconnection gives you a layer 2 connection and that's all; the
rest is on the customer to manage.
Shared connections connect Metal to other networks through either
layer 2 or layer 3.
Layer 2 interconnections are created using either the
=VlanFabricVCCreateInput= or the =SharedPortVCVlanCreateInput=. The
former provides the interconnection using service tokens, which Metal
uses to poll the status of the interconnections. These allowed us to
provide customers with connectivity, but the experience was poor: if
you look at the connection in Fabric, it's not clear how it relates to
Metal resources.
The =SharedPortVCVlanCreateInput= allows Fabric access to the related
network resources on the Metal side, which means managing these network
resources on Fabric is a little bit easier. This type of
interconnection did some groundwork to bring our physical and logical
networks between Metal and Fabric closer together. That's mostly
invisible to the customer, but it enables us to build products on our
network infrastructure that weren't previously possible.
Currently, both methods of creating these interconnections exist,
until we can deprecate the =VlanFabricVCCreateInput=. The
=SharedPortVCVlanCreateInput= type is only capable of layer 2
interconnections to Amazon Web Services. This new input type allows
Fabric to start supporting more layer 2 connectivity without requiring
any work on the Metal side. Once we reach parity with the connection
destinations of =VlanFabricVCCreateInput=, we can deprecate that input
type.
Layer 3 interconnections are created by passing the
=VrfFabricVCCreateInput= to the interconnections endpoint. These
isolate customer traffic by routing table instead of through VLAN
tags.

@@ -0,0 +1,11 @@
#+TITLE: Permissions Migration Home
#+AUTHOR: Adam Mohammed
#+DATE: September 18, 2024
* Initial Design Doc
* Test Plan
* Architecture Design Review doc
* Permissions Overview for Handbook

@@ -0,0 +1,88 @@
#+TITLE: Permissions Redesign
#+AUTHOR: Adam Mohammed
* Overview
This document describes what granularity we'll have available for MVP
when using permissions-api as the policy decision point (PDP).
* Top-Level Resources
** User Based Resources
- User
- APIKeys (bound to user)
** Project Level Resources
- Project (Read/update/delete)
- Instances
- Appliances
- Reservations (aka Hardware Reservations)
- Document
- IP Reservation
- IP Address
- IP Assignment
- Virtual Network
- Virtual Circuit
- Interconnection (Read/update)
- VRF
- Membership
- Invitations
- BGP Sessions
- BGP Configs
- Project API Keys
*** Lower-tier resources
- BGPDynamicNeighbors authorizes through MetalGateway
- ElasticIps authorizes through MetalGateway
- VRFIPReservation authorizes through VRF
- VRFLearnedRoutes authorizes through VRF
- VRFBGPNeighbors authorizes through VRF
- VRFStaticRoutes authorizes Through VRF
- Authorizes through Instance:
- Actions (reboot/power-cycle) (create, list)
- Ip Assignments (create, list only)
- Traffic (index only)
- Termination (POST only)
- BGPSessions (CRUD)
- BGPNeighbors (index only)
- Bandwidth (index only)
- SSH-keys (index only)
- Diagnostics (Read only)
- Metadata (read only)
- Userdata (read only)
- Error reports (create, read)
** Organization Level Resources
- Organization
- Project (create-only)
- Interconnection (create/delete)
** Weird ones
- BGP Config Requests
- 2FA enforce
* Phase 2
We decided to just put the actions on organizations/projects/users.
I can configure the check access hook to dump out the context I need.
For every controller + action, I need:
- The resource type the permission check is on
- The action name that the check requires
With that I can produce the policy that we need on the Permissions API side.

@@ -0,0 +1,286 @@
#+TITLE: Metal API Policy
#+AUTHOR: Adam Mohammed
* How to produce this information?
Using this snippet placed in =config/initializers/packet.rb=
#+begin_src ruby
# Stand-in "checker" that records every permission check to SQLite instead of
# enforcing it, so we can see which resource/action pairs the app asks about.
def permission_logger
  Class.new do
    def initialize(db)
      @db = db
    end

    def permissions_sql
      <<-SQL
        INSERT INTO metal_permissions (
          controller_name, path, resource, action
        ) VALUES ( ?, ?, ?, ?);
      SQL
    end

    # Always allow, but log the controller action, request path, resource, and action.
    def check_access(*args, **kwargs)
      context = kwargs[:context]
      controller = context[:controller]
      path = controller.request.path
      controller_action = "#{controller.class.name}##{controller.action_name}"
      @db.execute(permissions_sql, [controller_action, path, args[1], args[2]])
      true
    end
  end
end

def permissions_checker
  ::Authorization::PolicyEngine::IAMChecker.new(client: permission_logger.new(sqlite_db))
end
#+end_src
I then pushed a branch so the standard CI pipeline build would spit out
a DB with the results:
#+begin_src diff
modified .buildkite/pipeline.yaml
@@ -96,6 +96,8 @@ steps:
commands:
- /home/packet/api/.buildkite/script/parallel_test_setup.sh
- /home/packet/api/.buildkite/script/cucumber-container.sh
+ artifact_paths:
+ - 'test.db'
retry:
automatic:
- exit_status: "*"
@@ -109,6 +111,7 @@ steps:
env:
BUILD_NUMBER: ${BUILDKITE_BUILD_NUMBER}
API_BUILD_IMAGE: ${API_BUILD_IMAGE}
+ POLICY_ENGINE: "cancancan_wins"
- label: Rspec
key: "rspec"
@@ -117,6 +120,8 @@ steps:
commands:
- /home/packet/api/.buildkite/script/parallel_test_setup.sh
- /home/packet/api/.buildkite/script/rspec-container.sh
+ artifact_paths:
+ - 'test.db'
retry:
automatic:
- exit_status: "*"
@@ -132,6 +137,7 @@ steps:
env:
BUILD_NUMBER: ${BUILDKITE_BUILD_NUMBER}
API_BUILD_IMAGE: ${API_BUILD_IMAGE}
+ POLICY_ENGINE: "cancancan_wins"
- label: Build spec image
key: "spec-build"
#+end_src
If you later want to combine these DBs, you can do so as follows:
1. Download =test.db= from the rspec step and name it =test-rspec.db=
2. Download =test.db= from the cucumber step and name it =test-cucumber.db=
3. Create the merged =test-full-suite.db=
#+begin_src bash
$ cp test-rspec.db test-full-suite.db
$ sqlite3 'test-full-suite.db'
sqlite> ATTACH 'test-cucumber.db' AS cuke;
sqlite> BEGIN;
sqlite> INSERT INTO metal_permissions SELECT * FROM cuke.metal_permissions;
sqlite> COMMIT;
sqlite> DETACH cuke;
sqlite> .quit
#+end_src
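The action lists below can then be pulled out of the merged DB. A query roughly like the one below produces them; the specific =resource= values ('organization', 'project', 'user') are assumptions about how the checks were recorded:
#+begin_src bash
# Sketch: dump the distinct action names recorded against a given resource type
sqlite3 test-full-suite.db \
  "SELECT DISTINCT action FROM metal_permissions WHERE resource = 'organization' ORDER BY action;"
#+end_src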
* Organization actions
#+begin_src
metal_billing_information_get
metal_billing_information_update
metal_capability_list
metal_coupon_usage_list
metal_coupon_usage_redeem
metal_credit_create
metal_credit_delete
metal_credit_list
metal_discount_create
metal_enforce_2fa_create
metal_instances_listing_list
metal_interconnection_create
metal_interconnection_delete
metal_interconnection_get
metal_interconnection_list
metal_interconnection_port_get
metal_interconnection_port_list
metal_interconnection_update
metal_interconnection_virtual_circuit_create
metal_interconnection_virtual_circuit_list
metal_interconnection_virtual_circuit_update
metal_invitation_create
metal_invitation_delete
metal_invitation_get
metal_invitation_list
metal_invitation_resend
metal_invitation_update
metal_ip_address_delete
metal_ip_address_get
metal_lab_get
metal_leave_organization_create
metal_member_delete
metal_member_list
metal_member_update
metal_membership_delete
metal_membership_update
metal_organization_create
metal_organization_delete
metal_organization_get
metal_organization_logos
metal_organization_update
metal_payment_get
metal_payment_method_create
metal_payment_method_delete
metal_payment_method_get
metal_payment_method_list
metal_payment_method_update
metal_project_create
metal_search_search_plans
metal_tier_inquiry_create
metal_vendor_list
#+end_src
* Project actions
#+begin_src
metal_acl_list
metal_activate_create
metal_allocation_list
metal_batch_delete
metal_batch_get
metal_batch_list
metal_bgp_config_delete
metal_bgp_config_request_create
metal_bgp_config_update
metal_bgp_config_view
metal_bgp_dynamic_neighbor_create
metal_bgp_dynamic_neighbor_list
metal_bgp_neighbor_list
metal_bgp_session_create
metal_bgp_session_delete
metal_bgp_session_get
metal_bgp_session_list
metal_bgp_session_update
metal_discount_create
metal_dn_create
metal_dn_list
metal_ecx_connection_create
metal_ecx_connection_list
metal_error_report_create
metal_error_report_get
metal_event_alert_configuration_create
metal_event_alert_configuration_get
metal_event_alert_configuration_update
metal_firmware_set_get
metal_global_bgp_range_list
metal_hardware_reservation_get
metal_health_get
metal_instance_action_create
metal_instance_action_list
metal_instance_batch_create
metal_instance_create
metal_instance_delete
metal_instance_get
metal_instance_list
metal_instance_metadatum_show_by_ip
metal_instance_password_create
metal_instance_update
metal_instances_listing_list
metal_interconnection_create
metal_interconnection_list
metal_interconnection_virtual_circuit_create
metal_interconnection_virtual_circuit_list
metal_interconnection_virtual_circuit_update
metal_invitation_create
metal_invitation_list
metal_ip_address_delete
metal_ip_address_get
metal_ip_address_update
metal_ip_assignment_create
metal_ip_assignment_list
metal_ip_availability_available
metal_ip_reservation_create
metal_ip_reservation_list
metal_ip_reservation_request_create
metal_ip_reservation_update
metal_leave_project_create
metal_license_activation_get
metal_license_create
metal_license_delete
metal_license_get
metal_license_list
metal_license_update
metal_membership_delete
metal_membership_get
metal_membership_list
metal_membership_update
metal_metal_gateway_create
metal_metal_gateway_delete
metal_metal_gateway_elastic_ip_create
metal_metal_gateway_elastic_ip_list
metal_metal_gateway_get
metal_metal_gateway_list
metal_metering_limit_create
metal_move_create
metal_project_api_key_create
metal_project_api_key_list
metal_project_create
metal_project_delete
metal_project_get
metal_project_update
metal_reservation_create
metal_reservation_get
metal_reservation_list
metal_screenshot_get
metal_spot_market_request_create
metal_spot_market_request_delete
metal_spot_market_request_get
metal_spot_market_request_list
metal_subscribed_event_create
metal_subscribed_event_delete
metal_subscribed_event_get
metal_subscribed_event_list
metal_subscribed_events_all_create
metal_subscribed_events_all_delete
metal_traffic_list
metal_transfer_request_create
metal_transfer_request_delete
metal_transfer_request_get
metal_transfer_request_update
metal_userdatum_show_by_ip
metal_virtual_network_create
metal_virtual_network_delete
metal_virtual_network_get
metal_virtual_network_list
metal_virtual_network_update
metal_vrf_create
metal_vrf_delete
metal_vrf_get
metal_vrf_list
metal_vrf_route_create
metal_vrf_route_delete
metal_vrf_route_get
metal_vrf_route_list
metal_vrf_route_update
metal_vrf_update
#+end_src
* User actions
#+begin_src
metal_discount_create
metal_metering_limit_create
metal_sales_report_get
metal_user_avatars
metal_user_force_verify
metal_user_get
metal_user_update
#+end_src

@@ -0,0 +1,132 @@
#+TITLE: Testing IAM-Runtime checks for Metal API
#+AUTHOR: Adam Mohammed
* What's changed
In the Metal API, there's now the ability to run different authorization
policy engines. We have two engines: the cancancan engine and the
permissions API engine. We added the ability to run both during
the span of a request while explicitly naming one as the source of truth
for the ultimate authorization outcome.
* What are we trying to get out of this test?
We want to start sending authorization checks through permissions API,
but not break existing behavior. We need a way to validate that the
permission checks made through the runtime behave as we expect.
The first barrier to making sure we're not breaking production is to
run the combined policies for all CI test cases. This proves, for the
tested code paths, that we're at least able to serve requests.
This test plan deals with validating the policy definition and
integration with Permissions API in production.
* Stages of testing
We'll roll this out in a few stages to be careful, since we're
changing a fundamental piece of the architecture.
First, we'll run a smoke test suite against a canary which is separate
from production traffic.
Then, if the metrics look acceptable there, we'll roll this out to
production, while keeping an eye specifically on latency and number of
403s.
Then, we'll monitor for discrepancies between the models and address
them. This will be a bulk of the testing time, as we'll need a long
enough duration to receive an accurate sample size for operations
customers perform.
Finally, we can move over to only using the runtime for authorization
decisions.
The next sections describe the test setup, what we'll monitor at each
stage, the success criteria, and the rollback procedure.
** Initial Canary
In this setup, we'll have a separate ingress and deployment for the
Metal API. This will allow us to exclusively route traffic to the
backend configured to use the IAM runtime, while leaving production
traffic using the cancan policy only.
The purpose of doing this is to try and find any hidden bugs that
would cause an outage.
We'll test this by running the terraform CI tests against the canary image.
The success criteria for this step are:
- CI test passes
- CI test duration does not increase significantly compared to usual
runtimes (canary CI runtime <= 150% normal runtime)
Typical CI tests use API keys instead of an Identity API JWT, which would be
necessary for a permissions API check, so I'll need to modify
terraform to pull the credentials appropriately.
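A rough sketch of that credential step, reusing the token-exchange endpoint from the LBaaS testing notes elsewhere in these files (the variable names are only illustrative):
#+begin_src bash
# Illustrative: exchange a Metal user API key for an Identity API JWT
# before running the terraform suite.
export EXCHANGED_JWT=$(curl -s -X POST \
  -H "Authorization: Bearer ${METAL_API_KEY}" \
  https://iam.metalctrl.io/api-keys/exchange | jq -r '.access_token')
#+end_src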
Rolling back here just involves cleaning up canary resources, and has
no impact on customer experience.
** Production Roll-out
In this setup, we'll set the appropriate configuration for all HTTP
frontend pods. This will cause all requests to pass through both
policy engines and start generating trace data.
The purpose of this stage is to start getting real production work
passing through the permissions checks, but not yet starting to affect
the result of a request.
The testing in this stage is just to see that the frontends are
healthy, and that we're not immediately serving a spike of 403s. The rest of the
data will come from the next stage of the test plan, Monitoring.
Rolling back here is restarting the API with `POLICY_ENGINE` unset,
which defaults to only using cancancan.
** Monitoring
The setup here is no different than the previous stage, but it is
likely a bulk of the time, so I've separated it out. Here we'll be
monitoring tracing data to look for differences in authorization
decisions between the two engines.
The main failure we expect here is that the policy results differ,
which means either our definition of the equivalent Metal API roles in
Permissions API needs to be updated or, potentially, that the logic
that does the Metal API check is broken.
To detect this, I will create an HC dashboard to show authorization
decisions that don't match, which can be due to the following reasons:
- Policies are different
- Computed incorrect tenant resource
- Couldn't resolve the tenant resource
We can then address those issues on a case-by-case basis.
We're also interested in the overall latency impact:
- P95 Runtime Authorization check latency matches or is better than published
Permission API latency
Completion criteria:
- 100% accuracy on runtime checks that have been performed
There's probably a better metric here for determining "completeness",
but as a goal, driving discrepancies down toward 0 is a good
indicator that we're ready to cut over completely.
As we close the gap between the two models, we might decide that some
error is tolerable.
** Runtime is the source of truth
Once we're here and we're happy with how the policy model is performing,
we're ready to start using the runtime as the source of truth. This is
just a configuration change to set =POLICY_ENGINE= to =prefer_runtime= or
=runtime_only=.
Prefer runtime uses the runtime check where possible. We still need to
address the staff authorization model, so for that, we'll fall back to
existing cancancan policies.
At this point, customers authenticating with an exchanged token will
be served responses based on the permissions API policy.

@@ -0,0 +1,17 @@
#+TITLE: Implementing endpoint to expose learned routes
#+AUTHOR: Adam Mohammed
#+DATE: October 30, 2023
I asked Tim about what the difference is between
https://deploy.equinix.com/developers/api/metal#tag/VRFs/operation/getBgpDynamicNeighbors
and what's available in Trix.
The first is just data configured by the customer manually.
Trix exposes the learned routes from peers
I then asked if it made sense to expose the data as part of:
https://deploy.equinix.com/developers/api/metal#tag/VRFs/operation/getVrfRoutes
and the answer I got was probably.

@@ -0,0 +1,10 @@
#+TITLE: Backlog refinement
#+AUTHOR: Adam Mohammed
#+DATE: November 25, 2025
* How to use this file
This is where I mark tickets that need to be discussed in refinement
* TODO MID-603 - Non equinix staff account audit
* TODO MID-604

@@ -0,0 +1,181 @@
#+TITLE: LBaaS Testing
#+AUTHOR: Adam Mohammed
#+DATE: August 30, 2023
* API Testing
:PROPERTIES:
:header-args:shell: :session *bash3*
:header-args: :results output verbatim
:END:
#+begin_src shell
PS1="> "
export PAPI_KEY="my-user-api-key"
export PROJECT_ID=7c0d4b1d-4f21-4657-96d4-afe6236e361e
#+end_src
First let's exchange our user's API key for an infratographer JWT.
#+begin_src shell
export INFRA_TOK=$(curl -s -X POST -H"authorization: Bearer $PAPI_KEY" https://iam.metalctrl.io/api-keys/exchange | jq -M -r '.access_token' )
#+end_src
#+RESULTS:
If all went well, you should see a json object containing the =loadbalancers= key from this block.
#+begin_src shell
curl -s -H"Authorization: Bearer $INFRA_TOK" https://lb.metalctrl.io/v1/projects/${PROJECT_ID}/loadbalancers | jq -M
#+end_src
#+RESULTS:
#+begin_example
{
"loadbalancers": [
{
"created_at": "2023-08-30T18:26:19.534351Z",
"id": "loadbal-9OhCaBNHUXo_f-gC7YKzW",
"ips": [],
"name": "test-graphql",
"ports": [
{
"id": "loadprt-8fN2XRnwY8C0SGs_T-zhp",
"name": "public-http",
"number": 8080
}
],
"updated_at": "2023-08-30T18:26:19.534351Z"
},
{
"created_at": "2023-08-30T19:55:42.944273Z",
"id": "loadbal-pLdVJLcAa3UdbPEmGWwvB",
"ips": [],
"name": "test-graphql",
"ports": [
{
"id": "loadprt-N8xRozMbxZwtG2yAPk7Wx",
"name": "public-http",
"number": 8080
}
],
"updated_at": "2023-08-30T19:55:42.944273Z"
}
]
}
#+end_example
** Creating a LB
Here we'll create an empty LB with our newly exchanged token.
#+begin_src shell
curl -s \
-H"Authorization: Bearer $INFRA_TOK" \
-H"content-type: application/json" \
-d '{"name": "test-graphql", "location_id": "metlloc-da", "provider_id":"loadpvd-gOB_-byp5ebFo7A3LHv2B"}' \
https://lb.metalctrl.io/v1/projects/${PROJECT_ID}/loadbalancers | jq -M
#+end_src
#+RESULTS:
:
: > > > {
: "errors": null,
: "id": "loadbal-ygZi9cUywLk5_oAoLGMxh"
: }
All we have is an ID now, but eventually we should get an IP back.
#+begin_src shell
RES=$(curl -s \
-H"Authorization: Bearer $INFRA_TOK" \
https://lb.metalctrl.io/v1/projects/${PROJECT_ID}/loadbalancers | tee )
export LOADBALANCER_ID=$(echo $RES | jq -r '.loadbalancers | sort_by(.created_at) | reverse | .[0].id' )
echo $LOADBALANCER_ID
#+end_src
#+RESULTS:
:
: > > > loadbal-ygZi9cUywLk5_oAoLGMxh
** Create the backends
The load balancer requires a pool with an associated origin.
#+begin_src shell
export POOL_ID=$(curl -s -H"Authorization: Bearer $INFRA_TOK" \
-H"content-type: application/json" \
-d '{"name": "pool9", "protocol": "tcp"}' \
https://lb.metalctrl.io/v1/projects/${PROJECT_ID}/loadbalancers/pools | jq -r '.id')
echo $POOL_ID
#+end_src
#+RESULTS:
:
: > > > loadpol-hC_UY3Woqjfyfw1Tzr5R2
Let's create an origin that points to =icanhazip.com= so we can see how we're proxying.
#+begin_src shell
export TARGET_IP=$(dig +short icanhazip.com | head -1)
data=$(jq -M -c -n --arg port_id $POOL_ID --arg target_ip "$TARGET_IP" '{"name": "icanhazip9", "target": $target_ip, "port_id": $port_id, "port_number": 80, "active": true}' | tee )
curl -s \
-H"Authorization: Bearer $INFRA_TOK" \
-H"content-type: application/json" \
-d "$data" \
https://lb.metalctrl.io/v1/loadbalancers/pools/${POOL_ID}/origins | jq -M
#+end_src
#+RESULTS:
:
: > > > > > {
: "errors": null,
: "id": "loadogn-zfbMfqtFKeQ75Tul52h4Q"
: }
#+begin_src shell
curl -s \
-H"Authorization: Bearer $INFRA_TOK" \
-H"content-type: application/json" \
-d "$(jq -n -M -c -n --arg pool_id $POOL_ID '{"name": "public-http", "number": 8080, "pool_ids": [$pool_id]}')" \
https://lb.metalctrl.io/v1/loadbalancers/${LOADBALANCER_ID}/ports | jq -M
#+end_src
#+RESULTS:
:
: > > > {
: "errors": null,
: "id": "loadprt-IVrZB1sLUfKqdnDULd6Ix"
: }
** Let's try out the LB now
#+begin_src shell
curl -s \
-H"Authorization: Bearer $INFRA_TOK" \
-H"content-type: application/json" \
https://lb.metalctrl.io/v1/loadbalancers/${LOADBALANCER_ID} | jq -M
#+end_src
#+RESULTS:
#+begin_example
> > {
"created_at": "2023-08-30T20:10:59.389392Z",
"id": "loadbal-ygZi9cUywLk5_oAoLGMxh",
"ips": [],
"name": "test-graphql",
"ports": [
{
"id": "loadprt-IVrZB1sLUfKqdnDULd6Ix",
"name": "public-http",
"number": 8080
}
],
"provider": null,
"updated_at": "2023-08-30T20:10:59.389392Z"
}
#+end_example

equinix/test-plan-staff.md Normal file
@@ -0,0 +1,105 @@
# Table of Contents
1.  [Staff operator portal](#org992e17b)
    1.  [Primary problem](#org538f8d6)
    2.  [Auxiliary problems](#org4412c9f)
    3.  [Analysis](#org9b704f7)
    4.  [Work to be done](#orgcad9c50)
<a id="org992e17b"></a>
# Staff operator portal
<a id="org538f8d6"></a>
## Primary problem
We don't have granular control over roles granted to "staff" users.
<a id="org4412c9f"></a>
## Auxiliary problems
- We don't support SSO / separate login path
- We don't automatically offboard staff
- Acting as a Staff requires an additional credential
<a id="org9b704f7"></a>
## Analysis
We've already implemented authorization checks using the IAM runtime,
and can extend that for staff portal usage. The Metal API doesn't have
a clean separation between APIs for staff users and APIs for
customers. In some cases, depending on how you auth, an endpoint can
respond differently.
The way you auth today with an Auth0 JWT or API key identifies you as
the user that we have represented in the Metal DB. Additionally, you
can supply a consumer token, and an additional header \`x-packet-staff\`
to enable access to staff features and endpoints.
In the next iteration of the staff operator portal, we would like to
be able to give staff users more fine-grained permissions, so they
only have access to the resources and actions they need to carry out
their jobs.
Miles and team have gone through and identified use cases and
potential roles that we want users to be able to have.
The identity team's role in this is determining how best to structure
the policy so that we can support the roles as well as changes to the
roles with little intervention.
With EIS services, such as token exchange and permissions API, we have
a much more dynamic system for creating and applying
permissions.
By default authenticating to the Metal API should treat you as any
other customer. If you want to perform the request as if you were a
staff user, additional context would be necessary. Right now that
additional context is the consumer token and packet staff header. With
permissions API, we would no longer need the consumer token, and we
could get by with just the packet staff header. The staff header
purpose would just be to indicate to the backend that the user wants
to act in the staff context.
SpiceDB supports passing additional context within access checks to
perform a limited form of attribute-based access control. In our case,
a sample schema may look like this: <https://play.authzed.com/s/kAikrVuvYYJ7/schema>
In this example we have a resource \`metlins-foo\` owned by a tenant
\`child0\`, which is owned by the \`root\` tenant. There are two users
\`adammo\` and \`mason\`. The user \`adammo\` is a member of the tenant that
owns the instance resource, and as a result can view details of the
instance without any additional context.
The user \`mason\` is a member of the root tenant, and if that user
tries to view details they get a response back from spicedb indicating
that the check is caveated. If the correct context is sent with the
check, viewing details is allowed, otherwise it's denied.
This setup allows us to act as both customer and staff users without
needing to change relationships on the fly.
<a id="orgcad9c50"></a>
## Work to be done
In order to implement this for the staff portal, some additional work
is necessary:
- Permissions API needs to add support for allowing caveated access
checks
- The IAM Runtime spec needs support for that API
- Permissions API policy needs to be updated with caveated policy
- Relationships need to be published with required caveats
- Metal API needs to support passing additional context to the IAM
runtime for the caveat checks

@@ -0,0 +1,81 @@
* Staff operator portal
** Primary problem
We don't have granular control over roles granted to "staff" users.
** Auxiliary problems
- We don't support SSO / separate login path
- We don't automatically offboard staff
- Acting as a Staff requires an additional credential
** Analysis
We've already implemented authorization checks using the IAM runtime,
and can extend that for staff portal usage. The Metal API doesn't have
a clean separation between APIs for staff users and APIs for
customers. In some cases, depending on how you auth, an endpoint can
respond differently.
The way you auth today with an Auth0 JWT or API key identifies you as
the user that we have represented in the Metal DB. Additionally, you
can supply a consumer token, and an additional header `x-packet-staff`
to enable access to staff features and endpoints.
In the next iteration of the staff operator portal, we would like to
be able to give staff users more fine-grained permissions, so they
only have access to the resources and actions they need to carry out
their jobs.
Miles and team have gone through and identified use cases and
potential roles that we want users to be able to have.
The identity team's role in this is determining how best to structure
the policy so that we can support the roles as well as changes to the
roles with little intervention.
With EIS services, such as token exchange and permissions API, we have
a much more dynamic system for creating and applying
permissions.
By default authenticating to the Metal API should treat you as any
other customer. If you want to perform the request as if you were a
staff user, additional context would be necessary. Right now that
additional context is the consumer token and packet staff header. With
permissions API, we would no longer need the consumer token, and we
could get by with just the packet staff header. The staff header
purpose would just be to indicate to the backend that the user wants
to act in the staff context.
SpiceDB supports passing additional context within access checks to
perform a limited form of attribute-based access control. In our case,
a sample schema may look like this: https://play.authzed.com/s/kAikrVuvYYJ7/schema
In this example we have a resource `metlins-foo` owned by a tenant
`child0`, which is owned by the `root` tenant. There are two users
`adammo` and `mason`. The user `adammo` is a member of the tenant that
owns the instance resource, and as a result can view details of the
instance without any additional context.
The user `mason` is a member of the root tenant, and if that user
tries to view details they get a response back from spicedb indicating
that the check is caveated. If the correct context is sent with the
check, viewing details is allowed, otherwise it's denied.
This setup allows us to act as both customer and staff users without
needing to change relationships on the fly.
** Work to be done
In order to implement this for the staff portal, some additional work
is necessary:
- Permissions API needs to add support for allowing caveated access
checks
- The IAM Runtime spec needs support for that API
- Permissions API policy needs to be updated with caveated policy
- Relationships need to be published with required caveats
- Metal API needs to support passing additional context to the IAM
runtime for the caveat checks

@@ -0,0 +1,44 @@
# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc: # port 4317
      http: # port 4318

processors:
  batch:

  filter/auditable:
    spans:
      include:
        match_type: strict
        attributes:
          - key: auditable
            value: "true"

  transform/customer-facing:
    trace_statements:
      - context: resource
        statements:
          - 'keep_keys(attributes, ["service.name"])'
      - context: scope
        statements:
          - 'set(name, "equinixWatch")'
          - 'set(version, "1.0.0")'
      - context: span
        statements:
          - 'keep_keys(attributes, ["http.route", "http.method", "http.status_code", "http.scheme", "http.host", "user.id", "http.user_agent"])'
          - 'set(name, attributes["http.route"])'

exporters:
  file:
    path: /data/metrics.json

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors:
        - filter/auditable
        - transform/customer-facing
      exporters: [file]

@@ -0,0 +1,94 @@
#+TITLE: Integrating Equinix Metal API with Equinix Watch
#+AUTHOR: Adam Mohammed
* Problem
Equinix Watch has defined the format in which they want to ingest
auditable events. They chose OTLP as the protocol for
ingesting these events from services, restricting ingestion to
just the logging signal.
Normally when sending data to a collector, you would make use of the
OpenTelemetry libraries to make it easy to grab metadata about the
request and surrounding environment, without needing to manually
cobble that data together. Unfortunately, using OTEL logging as the
only signal that Equinix Watch accepts makes adoption needlessly
painful. Ruby does not have a stable client library for OTEL logs, and
neither does Golang.
Most of the spec provided by EquinixWatch does not actually relate to
the log that we would like to provide to the customer. OTEL Logging
aims to make this simple by using the Baggage and Context APIs to
enrich the log records with information about the surrounding
environment and context. Again, the implementations for these are
incomplete and not production ready.
Until the OTEL libraries provide support for context and baggage
propagation in the Logs API/SDK, this data will need to be
extracted and formatted specifically for Equinix Watch, meaning the
burden of integration is higher than it needs to be. If we end up
doing this, we'll probably just fetch the same data from the span
attributes anyway, to keep things consistent.
There's absolutely no reason to do this work when we can add the logs
in a structured way to the trace and pass that through to their custom
collector. By doing this we don't need to wait for the OTEL libraries
to provide logging implementations that do what traces already
provide.
The only reason I can see not to do this is that it makes Equinix
Watch have to handle translating trace information to a format that
can be delivered to their end targets. I'd argue that's going to need
to happen anyway, so why not make use of all the wonderful tools we
have to enrich the data you have as input, so you can build complete
and interesting audit logs for your end user?
* Concerns
- Alex: Yeahhhh I've gotta say I'm uncomfortable making our existing
OTEL collector, which is right now part of our internal tooling, and
making it part of the critical path for customer data with Equinix
Watch.
I don't understand this; of course you're going to be in your
critical path. I'm not saying to use your collector as the ONLY
collector; this is why we even have collectors. We are able to
configure where the data are exported.
- Alex: IMO internal traces are implementation details that are
subject to change and there are too many things that could go
wrong. What happens if the format of those traces changes due to
some library upgrade, or if there's memory pressure and we start
sampling events or something?
Traces being implementation details - like audit logs? There's a
reason we use standard libraries to instrument our traces. These
libraries follow OTEL Semantic Conventions so we have stable and
consistent span attributes that track data across services.
Memory pressure isn't solved by OTLP at all; in fact
collectors will refuse spans if they're experiencing memory pressure
to prevent getting OOMKilled. This is not an application concern,
this is a monitoring concern. You should know if your collector is under memory pressure.
- Alex: In my experience, devs in general have a higher tolerance for
gaps and breakage in their internal tooling than what I'm willing to
have for customer-facing audit logs.
This is just poor form. If you don't trust the applications that
integrate with your application, what do you trust?
- Alex: I think customer-facing observability is net-new functionality
and, for the time being, I'm OK with putting a higher burden on
applications producing that data than "flip a flag in the collector
to redirect part of the firehose to Equinix Watch".
Net-new - sure, I agree
Higher burden on applications producing the data - why though? We
can provide you a higher quality data source already instead of
hand-rolling an implementation of the logs signal
"flip a flag in the collector" - I think this just shows illiteracy,
but we are able to control what parts are shipped to your fragile
collector.

@@ -0,0 +1,58 @@
#+TITLE: Year in review
#+AUTHOR: Adam Mohammed
* January
- Setting up environments for platform to test auth0 changes against portal
- Created a golang library to make it easier to build algolia indexes
in our applications. Used by bouncer, and quantum to provide nice searchable
interfaces on our frontends.
- Implemented the initial OIDC endpoints for identity-api in LBaaS
* February
- Wrote helm charts for identity-API
- Bootstrapped initial identity-api deployment
- Discussed token format for identity-api
- Adding algolia indexing to quantum resources
* March
- Drafted plan for upgrading the monolith from Rails 5 to Rails 6 and Ruby 2 to Ruby 3.
- Implemented extra o11y where we needed for the upgrade
- Used gradual rollout strategy to build confidence
- Upgraded CRDB and documented the process
* April
- Added testing to exoskeleton - some gin tooling we use for go services
* May
- Started work on the ResourceOwnerDirectory
- Maintenance on exoskeleton
* June
- More ROD work
- Ruby 3 upgrade
- Added service to service clients for coupon
- Testing LBaaS with decuddle
- Added events to the API
* July
- Deploy Resource Owner Directory
* August
- Get ready for LBaaS Launch
* September
- Implemented queue scheduler
* Talks:
- Session Scheduler
- Static analysis on Ruby
- API Auth discussion with using identity-api
- API monitoring by thinking about what we actually deliver
- Deep diving caching issues from #_incent-1564
- Recorded deployment and monitoring of API
- Monitoring strategy for the API Rails/Ruby Upgrades
- CRDB performance troubleshooting
* Docs:

@@ -0,0 +1,138 @@
* Goal: Expand our Market - Lay the foundation for product-led growth
In the Nautilus team the biggest responsibility we have is the monolith, and as we've added people to the team, we're starting to add services that move new logic outside of the monolith. In order to make this simple and reduce maintenance burden, I've created exoskeleton and algolyzer, which are Go libraries that we can use to develop Go services a bit more quickly.
Exoskeleton provides a type-safe routing layer built on top of Gin, and bakes in OTEL so it's easy for us to take our services from local development to production ready.
Algolyzer makes it easier to update Algolia indexes outside of the request span, keeping latency low while still making sure our UIs can easily be searched for relevant objects.
Additionally, I have made a number of improvements to our core infrastructure:
- Improving monitoring of our application to make major upgrades less scary
- Upgrading from Rails 5 to Rails 6
- Upgrade from Ruby 2 to Ruby 3
- Deploying and performing regular maintenance on our CockroachDB cluster
- Diagnose anycast routing issues with our CRDB deployment that led to unexpectedly high latency, which resulted in changing the network from equal path routing to prefer local.
With these changes we're able to keep moving toward keeping the lights on while allowing us to experiment cheaply with common infra needed for smaller services.
* Goal: Build the foundation - A market-leading end-to-end user experience
As we started to deliver LBaaS, Infratographer had an entirely
different opinion on how to manage users and resource ownership, and I
created a GraphQL service to bridge the gap between infratographer
concepts and metal concepts, so when a customer uses the product,
it'll seem familiar. The metal API also emits events that can be
subscribed to over NATS to get updates for things such as organization
and project membership changes.
In order to accomplish this it meant close collaboration with the
identity team to help establish the interfaces and decide on who is
responsible for what parts. Load balancers can now be provisioned and
act as if they belong to a project, even though the system of record
lies completely outside of the Metal API.
VMC-E exposed that we had ordering issues in our VLAN assignments
portion of the networking stack. I worked with my team mates and SWNet
to improve the situation. I designed and implemented a queuing
solution that allows us to queue asynchronous tasks that are order
dependent on queues with a single consumer. We've already gotten
feedback from VMC-E and other customers that the correctness issues
with VLAN assignment have been solved, and we don't need to wait for a
complete networking overhaul from Orca to fix it. There are more
opportunities to target issues in our networking stack that suffer
from ordering issues with this solution.
For federated SSO, I was able to help keep communication between
Platform Identity, Nautilus and Portals flowing smoothly by
documenting exactly what was needed to get us in a position to onboard
our first set of customers using SSO. I used my knowledge of OAuth2 and
OpenID Connect and broke down the integration points in a document
shared between these teams so it was clear what we needed to do. This
made it easier to commit and deliver within the timeframe we set.
not networking specific
nano metal
audit logging
* Goal: DS FunctionalPriorities - Build, socialize, and execute on plan to improve engineering experience
Throughout this year, I've been circulating ideas in writing and in
shared forums more often. Within the Nautilus team I did 8 tech talks
to share ideas and information with the team and to solicit
feedback. I also wrote documents for collaborating with other teams
mainly for LBaaS (specifically around how it integrates with the
EMAPI) and federated SSO.
- CRDB performance troubleshooting
I discussed how I determined that anycast routing was not properly
weighted, and my methodology for designing tests to diagnose the issue.
- Monitoring strategy for the API Rails/Ruby Upgrades
Here I discussed how we intended to do these upgrades in a way that
built confidence on top of the confidence we got from our test
suites by measuring indicators of performance.
- Recorded deployment and monitoring of API
As we added more people to the team, recording this just made it
easier to have something we could point to for an API deployment. We
also have this process documented in the repo.
- Deep diving caching issues from #_incent-1564
We ran into a very hard to reproduce error where different users accessing
the same organization were returned the same
list of organizations/projects regardless of access. Although the
API prevented actual reads to the objects that the user didn't have
proper access to, serving the wrong set of IDs produced unexpected
behavior in the Portal. It took a long time to diagnose this, and
then I discussed the results with the team.
- API monitoring by thinking about what we actually deliver
Related to the rails upgrades, being able to accurately measure the
health of the monolith requires periodically re-evaluating if we're
measuring what matters.
- API Auth discussion with using identity-api
Discussion on the potential uses for identity-api in a
service-to-service context that the API uses quite frequently as we
build functionality outside of the API.
- Static analysis on Ruby
With a dynamically typed language, runtime exceptions are no fun,
but some static analysis goes a long way. In this talk I explained
how it works at the AST level and how we can use this to enforce
conventions that we have adopted in the API. As an action item, I
started enabling useful "cops" to prevent common logic errors in
ruby.
- Session Scheduler
Here I discussed the problem and the solution that we implemented
to prevent VLANs from being in inconsistent states when assigned and
unassigned quickly. The solution we delivered was generic, and
solved the problem simply, and this talk was to shine some light on
the new tool that the team has to use for ordering problems.
* Twilio account
always assisting the team
help new joinees to ramp up fast
participate in interviews
easy to work with across teams
clear communication
able to navigate
relations with delivery
not only engineering - product, devrel

@@ -0,0 +1,52 @@
#+TITLE: Notes about Zorbik Wyrdweave
#+AUTHOR: Adam Mohammed
#+DATE: January 4, 2025
* Character
Zorbik Wyrdweave is the son of Thurgo Wyrdweave and Yari Wyrdweave.
What houses were they in?
Thurgo was in Ravenclaw
Yari was in Hufflepuff
** [0/0] Background Questions
- [ ] Who are their parents
Thurgo is Zorbik's dad, and he was in
Ravenclaw as well. He did decently in school, and afterwards started
working at Gringotts after his friend told him the real magic was
with Gold. Zorbik knows his dad is passionate about making money but
sees that it doesn't fulfil him and doesn't want to be like Thurgo.
He does admire that his dad has been into herbology and has always
had a small garden behind their home.
Yari is Zorbik's mother, she was excellent in school and was able to
travel to other schools to learn about different customs and ways of
thinking about Wizarding. She's always taught Zorbik how to be
compassionate and how to be considerate of others' views.
Zorbik is the only child, but grew up with lots of other part-goblin
kids in his hometown. Growing up with a bunch of other kids from part
goblin families, he learned to scheme and plan, mostly for mischief.
Physical Traits: 3' 10" - 115 pounds - Mainly in Chest and Biceps, never did leg day
- [ ] Do they have any siblings?
- [ ] Personality Traits
Zorbik grew up as an only-child, but in a tight-knit community of wizards that descended from Goblins. As part of living in that town he learned how to make friends quickly, as well as how to scheme and plan to get into all forms of mischief. After leaving his hometown to go to school, he's become anxious when alone since he's used to having his friends close.
Zorbik is the son of Thurgo and Yari Wyrdweave, who also met at Hogwarts.
Thurgo is a financial advisor at Gringotts, which he has been at since he graduated school. His friend convinced him it was the path to riches and he was instantly sold. Thurgo hasn't yet come into those promised riches but firmly believes they're just around the corner. He's a tired old man now, and Zorbik doesn't believe his father is going to get the riches he's been chasing for so long, and is weary about following in his footsteps. Zorbik does enjoy spending time with Thurgo in the garden, which is the one thing Thurgo does to keep himself sane while he dreams of his life after he's made it. Zorbik's been able to craft a few basic potions with ingredients sourced from that garden.
Yari is an intelligent Witch, who did so well in school that she became a student ambassador that travelled to other wizarding schools to form alliances. She's got friends across the world, and now uses her wits as a Diplomat to maintain peace in the Wizarding world and keeping things in harmony with the Muggles. Yari's spent lot of time teaching Zorbik the value of understanding your friends and enemies.
Personality Traits:
- Friendly
- Likes trouble and mischief
- Good at reading people's true intentions
- Distrusts people chasing riches
- Gets anxious when alone

home/lab-organization.org Normal file
@@ -0,0 +1,34 @@
#+TITLE: Lab Organization
#+AUTHOR: Adam Mohammed
* Requirements
| Component | Software | # Deployed |
|-------------------+------------+------------|
| Metrics Database | Prometheus | 1 |
| Metrics Dashboard | Grafana | 1 |
| K8s Control Plane | Talos | 1 |
| K8s Worker Nodes | Talos | 1 |
| Git Repository | Forgejo | |
- Metrics Database Prometheus
- Metrics Dashboard Grafana
* Log
** Session 1
DEADLINE: <2025-01-11 Sat>
- Create FreeBSD VM
32 GB OS DISK
32 GB Workspace DISK
using Pkg-base instead of freebsd-update
Getting Bastille installed (rough setup sketch after the TODO list below)
*** TODO Get Traefik installed
*** TODO Get ForgeJo installed
*** TODO Get prometheus installed
*** TODO Get grafana installed
*** TODO Get node-exporter installed on crab
*** TODO I should re-assign this a static IP and then con
*** TODO Get PF installed
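A rough sketch of the Bastille setup referenced above, assuming the jails run the packaged versions of these services (the release number, jail names, IPs, and package names are placeholders):
#+begin_src sh
# Assumed workflow: bootstrap a release, create one jail per service, install packages
sudo bastille bootstrap 14.1-RELEASE
sudo bastille create forgejo 14.1-RELEASE 10.0.0.10
sudo bastille create metrics 14.1-RELEASE 10.0.0.11
sudo bastille pkg metrics install -y prometheus grafana
sudo bastille sysrc metrics prometheus_enable=YES
sudo bastille sysrc metrics grafana_enable=YES
sudo bastille service metrics prometheus start
sudo bastille service metrics grafana start
#+end_src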

home/network-design.org Normal file
@@ -0,0 +1,35 @@
#+TITLE: Network Design
#+AUTHOR: Adam Mohammed
* Gear
- Netgate SG-2100
- Netgear RAX80 (AP mode)
- 2 4 port switches
- TP-Link EAP610 (AP mode)
* Requirements
What do I have that needs an address?
- TV
- Printer
- WIFI switches / bulbs
- Work Laptops
- Work Phones
- Guest Devices
- HomeAssistant
- Adguard
- Personal Laptops
- Phones
- NAS
- Lab
- virtual Machines / Development
* Let's split /24 into 4 networks
10.28.1.0/26
10.28.1.64/26
10.28.1.128/26
10.28.1.192/26

home/projects.org Normal file
@@ -0,0 +1,18 @@
#+TITLE: Home Projects
#+AUTHOR: Adam Mohammed
* Problems
- I would like to have better tool organization in the basement
- Pegboard is one solution, but it's not very space effective
- Shelving and drawers can be useful, but requires planning
so it doesn't just become a mess.
- Hanging the Sail
- requires metal posts
* Weekend off plan
- Rent U-Haul
- Get Wood
- Get Circular Saw
- Get Hammer Drill
- Get Plywood

@@ -0,0 +1,47 @@
#+TITLE: Mobility Journal
* Learning about foot and hip mobility
+ [[https://www.reddit.com/r/FootFunction/comments/1319ecu/general_info_resources_for_understanding/][r/FootFunction general info]]
+ [[https://www.youtube.com/watch?v=31FPKiJVO7k][The Truth About Hip Internal Rotation - Connor Harris]]
+ [[https://www.youtube.com/watch?v=MIZGsiklYHA][The best place to start improving your hip internal rotation - The Musculoskeletal Clinic]]
Bought Connor Harris Program to start somewhere
It's an 8 week course.
** Beginner Body Restoration
*** Phase 1
- **Sidelying Decompression with breathing focus**
This is a 90 90 on your side with a towel under top of pelvis
2+ sets x 8 breaths per side
- **Sidelying 90/90 Half rolling**
This is a 90/90 on your side with a foam roller in between
move back and forth for 1-2 minutes without twisting lower back
2+ sets x 1-2 min/side
- **Latissimus Stretch**
Hold door frame with side you are stretching, same side leg drops back,
round your back, twist gently
3+ sets x 8 breaths per side
- **90/90 Hip Lift with active ball squeeze**
With legs elevated, used the couch, scoop tailbone off the floor by
pulling on hamstrings, once there, squeeze with a decent amount of force
3+ sets x 8 breaths
| Date | Phase | Total Sets | Notes |
|------------------+-------+------------+--------------------------------------------------------------------------|
| <2024-04-20 Sat> | 1     | 1          | Overall not too bad, had to play with the lat stretch to feel it properly |
| <2024-04-21 Sun> | 1 | 1 | Still playing around with Lat and hip lifts, but felt pretty easy |
| <2024-04-22 Mon> | 1 | 0 | Rest day |
| <2024-04-30 Tue> | 1 | 1 | |
| <2024-05-01 Wed> | 1 | - | |
| <2024-05-02 Thu> | 1 | 0 | Rest day |
| <2024-05-03 Fri> | 1 | - | |
| <2024-05-04 Sat> | 1 | - | |
| <2024-05-05 Sun> | 1 | - | |
| <2024-05-06 Mon> | 1 | - | Rest day |
| <2024-05-07 Tue> | 1 | - | |
| <2024-05-08 Wed> | 1 | - | |
| <2024-05-09 Thu> | 1 | - | Rest day |
| <2024-05-10 Fri> | 1 | - | |

@@ -0,0 +1,37 @@
#+TITLE: Performance-Aware Programming
#+AUTHOR: Casey Muratori
* Definition
Just knowing what you're doing can affect performance
Not doing actual low level performance optimization for specific hardware
"If you understand CSS you should understand this"
* Thinking about the CPU
If you think of a processor as a box which takes instructions as inputs
and then they do some work before producing the output, you have two
levers to pull for performance.
1. Reduce the # of instructions
Simplify the program or generally reduce the work that the CPU needs
2. Speed of the Instruction
Change the set of instructions you're passing through the CPU based
on how much time it might take to process.
* Prologue
In the simple Python program below, we achieve ~0.006 adds/cycle
#+begin_src python
def add(a, b):
    a + b

c = 1234 + 5678
#+end_src
In the naive C version we're looking at ~0.8 adds/cycle.
If we get smarter and do some optimization with SIMD we can achieve up to 16 adds/cycle.

@@ -0,0 +1,54 @@
#+TITLE: Common Lisp
#+AUTHOR: Adam Mohammed
* Blogs
- [[https://malisper.me][malisper.me]]
* Debugging
Add this to set SBCL to have debug mode enabled.
#+begin_src lisp
CL-USER> (declaim (optimize (debug 3)))
NIL
#+end_src
This is broken because of the divide by zero:
#+begin_src lisp
(defun fib (n)
  (if (<= 0 n 1)
      (/ 1 0)
      (+ (fib (- n 1))
         (fib (- n 2)))))
#+end_src
Running the above puts us in the debugger once we hit the base case, but we can edit the
function definition by adding =(break)= and then press ~r~ on the frame we wish to restart.
Once we fix the code we can restart stepping and the issue can be fixed live!
#+begin_src lisp
(defun fib (n)
  (break)
  (if (<= 0 n 1)
      (/ 1 0)
      (+ (fib (- n 1))
         (fib (- n 2)))))
#+end_src
You can toggle =C-c M-t= (slime trace dialog) on a function and then invoke it and view the results with =C-c T=.
=update-instance-for-redefined-class= is handy for defining migration behavior when you need to redefine a class.
Restarts can be a handy tool where throw/catch would normally be used. Restarts allow for a user-defined failure
functionality to be selected while still maintaining control in the function which caused the error.
+ References:
- [[https://malisper.me/debugging-lisp-part-1-recompilation/][Recompilation]]
- [[https://malisper.me/debugging-lisp-part-2-inspecting/][Inspecting]]
- [[https://malisper.me/debugging-lisp-part-3-redefining-classes/][Redefining Classes]]
- [[https://malisper.me/debugging-lisp-part-4-restarts/][Restarts]]
- [[https://malisper.me/debugging-lisp-part-5-miscellaneous/][tricks]]
* Libraries to look into
- [[https://github.com/mmontone/ten/blob/master][Ten]] - templating library
- [[https://scymtym.github.io/esrap/][Esrap]] - Parse backwards

@@ -1,28 +1,7 @@
* Tasks
** TODO Put together POC for micro-caching RAILS
** DONE Meeting with DevRel to talk about Provisioning Failures
Chris:
Cluster api - failed provision
it shows up with a 403 - moving the project to a new project
if the device is not ready handling
- Create a Test Plan for migrating VLANs
there was some effort in the past
jordan
should clients be polling events
if it appears in my devices list
pxe boot can time out
Phoning home
wouldn't want to see it
check on rescue and reinstall operations
** TODO Create a ticket to deal with 403s for provisioning failures
* Fun Tasks
- Set up BSD nodes running OSPF
- Take a look at yggdrassil
- Take a look at openziti

@@ -253,3 +253,296 @@ Results [[file:capacity_levels_pricing.csv][capacity_levels_pricing.csv]]
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Figure out /organizations caching
:PROPERTIES:
:ARCHIVE_TIME: 2023-06-06 Tue 16:34
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
So the most called version of the =/organizations= endpoint is the one that's called with the query params =?per_page=100=. Normally we're spending 200-300ms just getting the data to render the view.
At first I thought that includes and excludes would be the culprits here, but it doesn't seem so.
The view itself has many calls to `exlucdable_include_related` which is a construct that loads related values from the organization unless explicitly excluded.
This means that we're making several round trips to the DB to fetch data that is almost always in the view.
The best bang for our buck here is to parse the includes and excludes before we get to the view, and eager load as much as we can so that we save DB trips.
* DONE Meeting with DevRel to talk about Provisioning Failures
:PROPERTIES:
:ARCHIVE_TIME: 2023-06-06 Tue 16:34
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
Chris:
Cluster api - failed provision
it shows up with a 403 - moving the project to a new project
if the device is not ready handling
there was some effort in the past
jordan
should clients be polling events
if it appears in my devices list
pxe boot can time out
Phoning home
wouldn't want to see it
check on rescue and reinstall operations
* DONE Get PR for Atlas PR merged for k8s-nautilus-resource-owner
:PROPERTIES:
:ARCHIVE_TIME: 2023-06-07 Wed 22:19
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Figure out why api-internal is stuck
:PROPERTIES:
:ARCHIVE_TIME: 2023-07-04 Tue 13:04
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
problem: Still in INIT after 11m
node: Successfully assigned api/api-internal-697b64c8b7-vfwrh to prod-ny5-core-09
api-internal was missing NATS configuration, db:seed was triggering NATS events
* TODO Keep going with Infratographer events
:PROPERTIES:
:ARCHIVE_TIME: 2023-07-04 Tue 13:08
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: TODO
:END:
- The papertrail gem lets you know what changed:
  ~o.versions.first.object_changes~ yields the last set of saved changes
- Need to whitelist which fields are shareable
- Plan, take the raw change set
- Reduce changeset to whitelisted fields
- Emit that change set to infratographer.
- Try to figure out what the event structure for a membership being added means.
Revision: We ditched the changes and just slapped the object IDs
in the event.
This is actually a good move. By generating the event with only the ID we
reduce the chance that consumers depend on the state of an object
instead of just the ID.
* DONE Keep going with Infratographer events
:PROPERTIES:
:ARCHIVE_TIME: 2023-07-04 Tue 13:09
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
- The papertrail gem lets you know what changed:
  ~o.versions.first.object_changes~ yields the last set of saved changes
- Need to whitelist which fields are shareable
- Plan, take the raw change set
- Reduce changeset to whitelisted fields
- Emit that change set to infratographer.
- Try to figure out what the event structure for a membership being added means.
Revision: We ditched the changes and just slapped the object IDs
in the event.
This is actually a good move. By generating the event with only the ID we
reduce the chance that consumers depend on the state of an object
instead of just the ID.
* DONE Express concern around engineering quality
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-16 Wed 10:36
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
I raised concerns that this result doesn't meet the bar to achieve success on the problem we set out to solve.
I also believe it doesn't help solve the problem.
When I expressed that, the immediate response was defensiveness. That's the wrong response in itself.
I can be clearer -- there are two points I want to make here:
1. I don't think this solves the problem we were targeting.
2. We have finite time; deciding whether to iterate or to redesign based on what we learned is a decision we have to make.
3. Option 3, which I didn't think was possible, is to completely ignore the point.
Here's what's worse though. We're trying to transform the team to be enablers for other teams, so we need to set the bar.
We have people on this team that are trying to raise / set the bar, and when they raise concerns they are dismissed without
consideration.
My main concern is our ability to field feedback on this team. I can't
point to many occurrences of differing opinions being given the space they require.
At best it leads to apathy, at worst it leads to mediocrity. Either way it leads to a dysfunctional team by design.
Those people trying to raise the bar aren't doing so selfishly.
We don't get immediate gain out of performing better or working harder; we still get paid the same.
So why would we strive to raise the bar? This is a fundamental test of whether a leader understands
what high performers need to thrive.
* DONE Try to deploy
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-16 Wed 10:36
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Write ExternalSecretPush for DB creds and Secret key base
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-16 Wed 10:36
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Present Users through the ResourceOwnerShim
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-16 Wed 10:36
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* TODO Put together POC for micro-caching RAILS
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-16 Wed 10:36
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: TODO
:END:
* TODO Create a ticket to deal with 403s for provisioning failures
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-16 Wed 10:36
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: TODO
:END:
* TODO MAKE TICKETS
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-16 Wed 10:36
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: TODO
:END:
scheduler
worker configuration
cleanup job
* DONE Start Chuck Roast Braise
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-16 Wed 11:21
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Create Tickets for Scheduler
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-16 Wed 11:21
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Resource Owner shim returns deleted users and memberships
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-28 Mon 11:12
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Add defaults and non-nullable bits to DB
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-28 Mon 11:12
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Update queued at timestamp
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-28 Mon 11:12
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Start Work on Scheduler
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-28 Mon 11:12
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Review Miles LBaaS doc
:PROPERTIES:
:ARCHIVE_TIME: 2023-08-28 Mon 11:13
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Resource Owner Shim doesn't handle JWT errors properly
:PROPERTIES:
:ARCHIVE_TIME: 2024-07-30 Tue 08:51
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Resource Owner shim should check permissions for user
:PROPERTIES:
:ARCHIVE_TIME: 2024-07-30 Tue 08:52
:ARCHIVE_FILE: ~/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:

View File

@@ -2,7 +2,8 @@
#+AUTHOR: Adam Mohammed
#+DATE: [2023-04-17 Mon]
* First note
* Notes
Today I want to start using org notes to keep things better organized
and to keep my focus more consistent from day to day.
This is going to be a learning process, but I need a better way of taking and storing notes.
Eventually, I hope to be able to look back here and see a recap of what happened in a quarter,
or even longer.

1481
salvage_license_costs.org Normal file

File diff suppressed because it is too large

0
standup/identity.org Normal file
View File