Notes 6/7/23

2023-06-07 22:20:32 -04:00
parent b1e8dea50e
commit 47dc5e09ad
3 changed files with 93 additions and 97 deletions

common-lisp.org (new file)
@@ -0,0 +1,49 @@
#+TITLE: Common Lisp
#+AUTHOR: Adam Mohammed
* Blogs
- [[https://malisper.me][malisper.me]]
* Debugging
Evaluate this to tell SBCL to compile with full debug information:
#+begin_src lisp
CL-USER> (declaim (optimize (debug 3)))
NIL
#+end_src
This definition is broken because the base case divides by zero:
#+begin_src lisp
(defun fib (n)
  (if (<= 0 n 1)
      (/ 1 0)
      (+ (fib (- n 1))
         (fib (- n 2)))))
#+end_src
Running the above puts us in the debugger once we hit the base case. We can then edit the
function definition to add =(break)=, recompile it, and press ~r~ on the frame we wish to restart.
Once the code is corrected we can resume execution and fix the issue live!
#+begin_src lisp
(defun fib (n)
  (break)
  (if (<= 0 n 1)
      (/ 1 0)
      (+ (fib (- n 1))
         (fib (- n 2)))))
#+end_src
You can toggle tracing on a function with =C-c M-t= (SLIME trace dialog), invoke the function, and then view the results with =C-c T=.
=update-instance-for-redefined-class= is handy for defining migration behavior when you need to redefine a class.
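A minimal sketch of how that hook can migrate instances, using a made-up =point= class whose two slots are collapsed into one (all names here are hypothetical):
#+begin_src lisp
;; Hypothetical example: POINT originally stores X and Y separately.
(defclass point ()
  ((x :initarg :x :accessor point-x)
   (y :initarg :y :accessor point-y)))

(defvar *p* (make-instance 'point :x 1 :y 2))

;; Redefine POINT to store a single COORDS list instead.
(defclass point ()
  ((coords :initarg :coords :accessor point-coords)))

;; Existing instances are updated lazily, on next access; this method
;; says how to carry the old slot values over. PROPERTY-LIST is a
;; plist of the discarded slot names and their old values.
(defmethod update-instance-for-redefined-class :after
    ((instance point) added-slots discarded-slots property-list
     &rest initargs)
  (declare (ignore added-slots discarded-slots initargs))
  (setf (point-coords instance)
        (list (getf property-list 'x)
              (getf property-list 'y))))

;; (point-coords *p*) => (1 2)
#+end_src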
Restarts can be a handy tool where throw/catch would normally be used. They allow user-defined failure
handling to be selected while control stays in the function that signaled the error.
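For example, a sketch of the pattern (the condition, restart, and function names are all made up for illustration):
#+begin_src lisp
;; The low-level function signals an error but offers named restarts,
;; so it keeps control at the error site until a handler decides.
(define-condition malformed-entry (error)
  ((entry :initarg :entry :reader malformed-entry-entry)))

(defun parse-entry (entry)
  (if (stringp entry)
      entry
      (restart-case (error 'malformed-entry :entry entry)
        (skip-entry ()
          :report "Skip this entry."
          nil)
        (use-replacement (value)
          :report "Use a replacement value."
          value))))

;; The caller picks the recovery strategy without unwinding the stack,
;; unlike throw/catch, which would discard the frame that erred.
(defun parse-all (entries)
  (handler-bind ((malformed-entry
                   (lambda (condition)
                     (declare (ignore condition))
                     (invoke-restart 'skip-entry))))
    (remove nil (mapcar #'parse-entry entries))))

;; (parse-all '("a" 42 "b")) => ("a" "b")
#+end_src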
+ References:
- [[https://malisper.me/debugging-lisp-part-1-recompilation/][Recompilation]]
- [[https://malisper.me/debugging-lisp-part-2-inspecting/][Inspecting]]
- [[https://malisper.me/debugging-lisp-part-3-redefining-classes/][Redefining Classes]]
- [[https://malisper.me/debugging-lisp-part-4-restarts/][Restarts]]
- [[https://malisper.me/debugging-lisp-part-5-miscellaneous/][Miscellaneous]]

@@ -1,29 +1,5 @@
 * Tasks
-** TODO Write ExternalSecretPush for DB creds and Secret key base
-** TODO Try to deploy
 ** TODO Put together POC for micro-caching RAILS
-** DONE Meeting with DevRel to talk about Provisioning Failures
-Chris:
-- Cluster api - failed provision
-- it shows up with a 403 - moving the project to a new project
-- if the device is not ready handling
-- there was some effort in the past
-Jordan:
-- should clients be polling events
-- if it appears in my devices list
-- pxe boot can time out
-- Phoning home - wouldn't want to see it
-- check on rescue and reinstall operations
 ** TODO Create a ticket to deal with 403s for provisioning failures
-** TODO Talk to Laurence about self service reservations

@@ -254,92 +254,63 @@ Results [[file:capacity_levels_pricing.csv][capacity_levels_pricing.csv]]
 :ARCHIVE_TODO: DONE
 :END:
-* DONE Talk about interfaces OU in graphql
-:PROPERTIES:
-:ARCHIVE_TIME: 2023-05-19 Fri 13:27
-:ARCHIVE_FILE: ~/notes/org-notes/notes.org
-:ARCHIVE_OLPATH: Tasks
-:ARCHIVE_CATEGORY: notes
-:ARCHIVE_TODO: DONE
-:END:
-- when you define an io, that subgraph can't implement that interface
-- create EM project type, that also implements Resource Owner
-- it can have an EM organization, and that org can have the ID
-- opts:
-  - Provides graph for projects and orgs
-  - Or
-Problem statement:
-We need a way to expose metal orgs/projects to the infratographer
-supergraph, because graph queries will need to resolve Metal project
-IDs as part of their request.
-Discussion:
-We originally had planned that we'd replicate the state of
-orgs/projects up to the tenant-api, but after the graph changes we may
-not want to do that. This is a use case where we may want to implement
-our own graph API which serves up a Resource Owner.
-Currently tenant-api would expose a resource that implements the
-Resource Owner interface. Instead of replicating the data up to the
-tenant-api, we can instead expose a graph API from the data within the
-monolith.
-Doing this means we don't have to lift the data out of the monolith
-directly, but instead just present it in a way that fits with the
-infratographer expectations.
-We would still need to emit events about operations which modify
-organizations/projects because those events are consumed by other
-services in the ecosystem to update their internal states.
-Options:
-1. Serve data from a shim service while reading from a replica of the
-   monolith DB
-2. Serve data from a shim service with its own data store
-3. Serve data directly from the monolith with a graph interface
-4. Migrate Org/Project data completely out of the monolith and expose
-   an interface.
-Option 4 is the heaviest option, and may be the ultimate goal, but it
-comes with high risk and is not quick to implement since every feature
-relates back to organizations and projects on some level.
-Option 3 would be much simpler, but doesn't move toward the goal of
-carving out functionality of the monolith.
-Option 2 starts to split out data, but still requires us to lift data
-out of the monolith and keep it consistent for the time being.
-Option 1 creates a dedicated service for the infratographer graph
-integration, which achieves the goal of presenting an interface to
-satisfy the GraphQL interfaces, while still giving us a stepping stone
-to start carving data out of the monolith.
-** References:
-[[https://gist.github.com/jnschaeffer/1217a7a597f7cdab0c91493a994ed615#file-tenancy-org][tenancy gist]]
-* DONE Discuss tenantID vs projectID with John
-:PROPERTIES:
-:ARCHIVE_TIME: 2023-05-19 Fri 13:27
-:ARCHIVE_FILE: ~/notes/org-notes/notes.org
-:ARCHIVE_OLPATH: Tasks
-:ARCHIVE_CATEGORY: notes
-:ARCHIVE_TODO: DONE
-:END:
-GraphQL - implementation specific
-GraphQL
-- resolvers for every type
-- apollo supergraph does the mapping from resource type to service
-- interface - set of required fields
-- __type determines resource type
-- nanoid thing ent node
+* DONE Figure out /organizations caching
+:PROPERTIES:
+:ARCHIVE_TIME: 2023-06-06 Tue 16:34
+:ARCHIVE_FILE: ~/org-notes/notes.org
+:ARCHIVE_OLPATH: Tasks
+:ARCHIVE_CATEGORY: notes
+:ARCHIVE_TODO: DONE
+:END:
+The most-called version of the =/organizations= endpoint is the one that's called with the query params =?per_page=100=. Normally we're spending 200-300ms just getting the data to render the view.
+At first I thought that includes and excludes would be the culprits here, but it doesn't seem so.
+The view itself has many calls to =excludable_include_related=, a construct that loads related values from the organization unless explicitly excluded.
+This means that we're making several round trips to the DB to fetch data that is almost always in the view.
+The best bang for our buck here is to parse the includes and excludes before we get to the view, and eager load as much as we can so that we save DB trips.
+* DONE Meeting with DevRel to talk about Provisioning Failures
+:PROPERTIES:
+:ARCHIVE_TIME: 2023-06-06 Tue 16:34
+:ARCHIVE_FILE: ~/org-notes/notes.org
+:ARCHIVE_OLPATH: Tasks
+:ARCHIVE_CATEGORY: notes
+:ARCHIVE_TODO: DONE
+:END:
+Chris:
+- Cluster api - failed provision
+- it shows up with a 403 - moving the project to a new project
+- if the device is not ready handling
+- there was some effort in the past
+Jordan:
+- should clients be polling events
+- if it appears in my devices list
+- pxe boot can time out
+- Phoning home - wouldn't want to see it
+- check on rescue and reinstall operations
+* DONE Get PR for Atlas PR merged for k8s-nautilus-resource-owner
+:PROPERTIES:
+:ARCHIVE_TIME: 2023-06-07 Wed 22:19
+:ARCHIVE_FILE: ~/org-notes/notes.org
+:ARCHIVE_OLPATH: Tasks
+:ARCHIVE_CATEGORY: notes
+:ARCHIVE_TODO: DONE
+:END: