# -*- mode: org -*-
Archived entries from file /home/adam/notes/org-notes/notes.org
* DONE Interview with Roopa Bose
:PROPERTIES:
:ARCHIVE_TIME: 2023-04-19 Wed 10:55
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
- 7 years experience
- Go gRPC
- Docker
- DB
- JS
** DONE Roopa Interview
How did you reduce deployment time?
How did you optimize dependencies?
What went into reducing latency with API calls?
Just moved to Canada
6 years Java
2 yrs Golang
transition from Java to Golang
tutorials
had to build a feature - used this as practical experience
startup experience
Got into Angular as part of startup
EC2 and S3
Interested in backends
Difference b/w authentication and authorization:
- authentication: are you who you say you are?
- authorization: do you have permissions?
Describe the structure of a JWT (sketch below):
- token expiration
- session management
- user information is encrypted
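For reference, a minimal Ruby sketch of the JWT shape (claims below are
hypothetical; the payload is base64url-encoded, not encrypted - the
signature is what makes it trustworthy):
#+begin_src ruby
require "base64"
require "json"

# A JWT is three dot-separated base64url segments: header.payload.signature.
header  = { alg: "HS256", typ: "JWT" }
payload = { sub: "user-123", exp: Time.now.to_i + 3600 } # exp drives token expiration

encode = ->(part) { Base64.urlsafe_encode64(JSON.generate(part), padding: false) }
token  = [encode.call(header), encode.call(payload), "signature-elided"].join(".")

# Anyone can read the claims back without a key:
claims = JSON.parse(Base64.urlsafe_decode64(token.split(".")[1]))
puts claims["exp"]
#+end_src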
Containers:
- a container envelops the environment
Prod issue was reported by customer:
- missed a test case?
- did not have a proper testing tool?
- testing strategy?
- QA team and developer?
What languages are you most comfortable with?
- leet code done in Java
Returned cache result
Rotate secrets
-- found out from logs
Diagnosing the issues: service had exceeded timeout
Marques's Q:
** DONE Give Feedback on Roopa
Roopa didn't seem like the strongest candidate to me.
During the short-form questions, she conceptually understood JWTs and
authentication vs. authorization. She hasn't worked with K8s. With a
little guidance she did seem to understand what a container was, though
at first her explanation was "a virtualenv".
My read on the answers was that she communicated decent surface-level
knowledge, but when we drilled down a bit further, her understanding
broke down quickly.
With respect to the long-form questions, she did much better. I liked
her answers about how she had improved the microservices at her
previous job. She had implemented a caching layer to keep data local
within their services as opposed to always hitting Salesforce. I would
say that her process for identifying the issue was a bit lacking, but
she did deliver results, so it worked out. Her strategy for diagnosing
the problem was more a matter of seeing errors in the logs and then
working back to the fix, but given that they didn't have much in terms
of o11y, I think that's the best she could have done.
She didn't really know what we do, and wasn't able to get very far
guessing at how provisioning would work. Given that she hasn't worked
in the space, I would expect her to reason about what could possibly
go into it, but she didn't get far without Sarah leading her through
it.
Overall, I don't think she's a fit for the senior position.
* DONE Add logging to Rails for users affected by =inc-1564=
:PROPERTIES:
:ARCHIVE_TIME: 2023-04-19 Wed 10:55
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Figure out what users are affected by =#_incident-1564=
:PROPERTIES:
:ARCHIVE_TIME: 2023-04-24 Mon 15:01
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
[2023-04-19 Wed]
98b54728-ca71-47f8-8a86-5326dfa94b68
969eb393-ddfe-4db8-83e5-a4f6a4d1cae8
96293db0-1756-4a2e-97dc-3ed8051c76c3
f31ca1b8-35c9-43b1-8abd-205157da57ab
cc817f6e-f56f-4cae-91f2-eb1a85049847
** DONE Get FeatureFlag =inc-1564= created
** DONE Turn on FeatureFlag for the affected users
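A rough sketch of the flag flip (hypothetical: assumes the Rails app
uses the Flipper gem and that the UUIDs above are User IDs):
#+begin_src ruby
require "flipper"

affected_user_ids = %w[
  98b54728-ca71-47f8-8a86-5326dfa94b68
  969eb393-ddfe-4db8-83e5-a4f6a4d1cae8
  96293db0-1756-4a2e-97dc-3ed8051c76c3
  f31ca1b8-35c9-43b1-8abd-205157da57ab
  cc817f6e-f56f-4cae-91f2-eb1a85049847
]

affected_user_ids.each do |id|
  user = User.find(id) # assumes User responds to #flipper_id (e.g. via Flipper::Identifier)
  Flipper.enable_actor("inc-1564", user)
end
#+end_src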
* DONE Figure out how API caching works
:PROPERTIES:
:ARCHIVE_TIME: 2023-04-24 Mon 15:01
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
[2023-04-21 Fri]
[[file:~/repos/packet_api/app]]
#+begin_src ruby
{"api-memcached:11211"=>
{"pid"=>"1",
"uptime"=>"14588595",
"time"=>"1682087387",
"version"=>"1.4.39",
"libevent"=>"2.0.21-stable",
"pointer_size"=>"64",
"rusage_user"=>"55637.149812",
"rusage_system"=>"312432.516705",
"curr_connections"=>"178",
"total_connections"=>"6312162",
"connection_structures"=>"233",
"reserved_fds"=>"20",
"cmd_get"=>"66235052615",
"cmd_set"=>"2891259574",
"cmd_flush"=>"0",
"cmd_touch"=>"0",
"get_hits"=>"66044041688",
"get_misses"=>"191010927",
"get_expired"=>"7840267",
"get_flushed"=>"0",
"delete_misses"=>"151488684",
"delete_hits"=>"4971710",
"incr_misses"=>"0",
"incr_hits"=>"0",
"decr_misses"=>"0",
"decr_hits"=>"0",
"cas_misses"=>"0",
"cas_hits"=>"602205",
"cas_badval"=>"35181",
"touch_hits"=>"0",
"touch_misses"=>"0",
"auth_cmds"=>"0",
"auth_errors"=>"0",
"bytes_read"=>"11288707288755",
"bytes_written"=>"91579773904379",
"limit_maxbytes"=>"67108864",
"accepting_conns"=>"1",
"listen_disabled_num"=>"0",
"time_in_listen_disabled_us"=>"0",
"threads"=>"4",
"conn_yields"=>"2570603764",
"hash_power_level"=>"16",
"hash_bytes"=>"524288",
"hash_is_expanding"=>"0",
"malloc_fails"=>"0",
"log_worker_dropped"=>"0",
"log_worker_written"=>"0",
"log_watcher_skipped"=>"0",
"log_watcher_sent"=>"0",
"bytes"=>"66018118",
"curr_items"=>"44567",
"total_items"=>"2582414728",
"expired_unfetched"=>"5440550",
"evicted_unfetched"=>"139326914",
"evictions"=>"161345638",
"reclaimed"=>"6384399",
"crawler_reclaimed"=>"0",
"crawler_items_checked"=>"0",
"lrutail_reflocked"=>"48"}
}
#+end_src
Found out that you can set =-vv= on memcached to get the commands dumped out.
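The stats above can be pulled from Ruby like this (a sketch, assuming
the API uses the Dalli memcached client; the server name comes from the
output above):
#+begin_src ruby
require "dalli"

client = Dalli::Client.new("api-memcached:11211")
stats  = client.stats # => { "api-memcached:11211" => { "get_hits" => ..., ... } }

s      = stats["api-memcached:11211"]
hits   = s["get_hits"].to_f
misses = s["get_misses"].to_f
puts format("hit rate: %.4f", hits / (hits + misses)) # ~0.9971 for the dump above
#+end_src
Also worth noting from the dump: limit_maxbytes is only 64 MiB and
evictions is ~161M, so the cache looks memory-constrained.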
* DONE Look at VMC-E SDDC
:PROPERTIES:
:ARCHIVE_TIME: 2023-04-24 Mon 15:02
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Audit Spot Market Bids
:PROPERTIES:
:ARCHIVE_TIME: 2023-05-10 Wed 16:03
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
#+name: max-bids-per-facility
#+begin_src sql
SELECT p.slug, array_agg(f.code), array_agg(cl.max_allowed_bid)
FROM capacity_levels cl
JOIN plans p ON cl.plan_id = p.id
JOIN facilities f ON cl.facility_id = f.id
JOIN metros m ON f.metro_id = m.id
GROUP BY p.slug
ORDER BY p.slug ASC;
#+end_src
#+name: checking-for-distinct-prices
#+begin_src sql
SELECT cl.plan_id, cl.max_allowed_bid, COUNT(DISTINCT cl.max_allowed_bid)
FROM capacity_levels cl
WHERE cl.deleted_at < 'January 1, 1970'
GROUP BY plan_id, max_allowed_bid;
#+end_src
Results [[file:capacity_levels_pricing.csv][capacity_levels_pricing.csv]]
* DONE Upgrade CRDB to 22.2.7
:PROPERTIES:
:ARCHIVE_TIME: 2023-05-10 Wed 16:03
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
* DONE Talk about interfaces OU in GraphQL
:PROPERTIES:
:ARCHIVE_TIME: 2023-05-19 Fri 13:27
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
- when you define an interface object (@interfaceObject), that subgraph can't implement that interface
- create an EM project type that also implements ResourceOwner
- it can have an EM organization, and that org can have the ID
- opts:
  - provides graph for projects and orgs
  - or
Problem statement:
We need a way to expose Metal orgs/projects to the infratographer
supergraph, because graph queries will need to resolve Metal project
IDs as part of their request.
Discussion:
We had originally planned to replicate the state of orgs/projects up
to the tenant-api, but after the graph changes we may not want to do
that. This is a use case where we may want to implement our own graph
API which serves up a Resource Owner.
Currently tenant-api would expose a resource that implements the
Resource Owner interface. Instead of replicating the data up to the
tenant-api, we can instead expose a graph API from the data within the
monolith.
Doing this means we don't have to lift the data out of the monolith
directly, but instead just present it in a way that fits with the
infratographer expectations.
We would still need to emit events about operations which modify
organizations/projects because those events are consumed by other
services in the ecosystem to update their internal states.
Options:
1. Serve data from a shim service while reading from a replica of the
monolith DB
2. Serve data from a shim service with its own data store
3. Serve data directly from the monolith with a graph interface
4. Migrate Org/Project data completely out of the monolith and expose
an interface.
Option 4 is the heaviest option, and may be the ultimate goal, but it
comes with high risk and is not quick to implement since every feature
relates back to organizations and projects on some level.
Option 3 would be much simpler, but doesn't move toward the goal of
carving functionality out of the monolith.
Option 2 starts to split out data, but still requires us to lift data
out of the monolith and keep it consistent for the time being.
Option 1 creates a dedicated service for the infratographer graph
integration, which achieves the goal of presenting an interface to
satisfy the GraphQL interfaces, while still giving us a stepping stone
to start carving data out of the monolith.
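A rough sketch of what "expose a resource that implements the Resource
Owner interface" could look like from the monolith side, in
graphql-ruby (type and field names are hypothetical; assumes the usual
generated Types::BaseInterface and Types::BaseObject base classes):
#+begin_src ruby
module Types
  # The interface the supergraph expects: a set of required fields.
  module ResourceOwner
    include Types::BaseInterface
    field :id, ID, null: false
  end

  class MetalOrganization < Types::BaseObject
    implements Types::ResourceOwner
    field :id, ID, null: false
    field :name, String, null: false
  end

  class MetalProject < Types::BaseObject
    implements Types::ResourceOwner
    field :id, ID, null: false
    field :name, String, null: false
    field :organization, Types::MetalOrganization, null: false
  end
end
#+end_src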
** References:
[[https://gist.github.com/jnschaeffer/1217a7a597f7cdab0c91493a994ed615#file-tenancy-org][tenancy gist]]
* DONE Discuss tenantID vs projectID with John
:PROPERTIES:
:ARCHIVE_TIME: 2023-05-19 Fri 13:27
:ARCHIVE_FILE: ~/notes/org-notes/notes.org
:ARCHIVE_OLPATH: Tasks
:ARCHIVE_CATEGORY: notes
:ARCHIVE_TODO: DONE
:END:
GraphQL - implementation specific:
- resolvers for every type
- Apollo supergraph does the mapping from resource type to service
- interface = a set of required fields
- __typename determines the resource type (sketch below)
- nanoid IDs, ent Node interface
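Illustration of the __typename point (field name hypothetical): any
field typed as an interface can be asked for __typename to get the
concrete type behind it.
#+begin_src ruby
query = <<~GRAPHQL
  {
    resourceOwner(id: "some-id") {
      __typename  # e.g. "MetalProject" vs "MetalOrganization"
      id
    }
  }
GRAPHQL
#+end_src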