author     Jasper Ras <jaspert.ras@gmail.com>  2025-07-15 20:34:14 +0200
committer  Jasper Ras <jaspert.ras@gmail.com>  2025-07-15 20:34:14 +0200
commit     62483ddbc85da140b36eee2fa6bc43e7093eb3ad (patch)
tree       33550a06020ec71e2777a6241447a40ba98c10ca /.trash/daily
parent     04db4c941799bfbfac666160e7b4298716649a7f (diff)
parent     a9886bf2f8a35369a2c42070c5f83504dfab2bc5 (diff)
vault backup: 2025-07-15 20:34:14
Diffstat (limited to '.trash/daily')
-rw-r--r--  .trash/daily/04-Jun-2025.md  6
-rw-r--r--  .trash/daily/06-May-2025.md  31
-rw-r--r--  .trash/daily/07-May-2025.md  22
-rw-r--r--  .trash/daily/09-May-2025.md  14
-rw-r--r--  .trash/daily/10-Jun-2025.md  20
-rw-r--r--  .trash/daily/11-Jun-2025.md  8
-rw-r--r--  .trash/daily/12-May-2025.md  11
-rw-r--r--  .trash/daily/17-May-2025.md  3
-rw-r--r--  .trash/daily/18-Jun-2025.md  2
-rw-r--r--  .trash/daily/18-May-2025.md  7
-rw-r--r--  .trash/daily/19-May-2025.md  3
-rw-r--r--  .trash/daily/22-Jun-2025.md  4
-rw-r--r--  .trash/daily/23-Jun-2025.md  0
-rw-r--r--  .trash/daily/25-Jun-2025.md  3
-rw-r--r--  .trash/daily/27-May-2025.md  6
-rw-r--r--  .trash/daily/archive/01-May-2025.md  8
-rw-r--r--  .trash/daily/archive/10-Apr-2025.md  20
-rw-r--r--  .trash/daily/archive/11-Apr-2025.md  29
-rw-r--r--  .trash/daily/archive/14-Apr-2025.md  24
-rw-r--r--  .trash/daily/archive/15-Apr-2025.md  30
-rw-r--r--  .trash/daily/archive/16-Apr-2025.md  17
-rw-r--r--  .trash/daily/archive/17-Apr-2025.md  4
-rw-r--r--  .trash/daily/archive/18-Apr-2025.md  12
-rw-r--r--  .trash/daily/archive/22-Apr-2025.md  34
-rw-r--r--  .trash/daily/archive/23-Apr-2025.md  69
-rw-r--r--  .trash/daily/archive/24-Apr-2025.md  9
-rw-r--r--  .trash/daily/archive/25-Apr-2025.md  19
27 files changed, 415 insertions, 0 deletions
diff --git a/.trash/daily/04-Jun-2025.md b/.trash/daily/04-Jun-2025.md
new file mode 100644
index 0000000..49d99df
--- /dev/null
+++ b/.trash/daily/04-Jun-2025.md
@@ -0,0 +1,6 @@
+[[Daily]]
+
+**Occurrences**
+Trying to install goba/gobs in the global venv of devstack. Had to relax the dependency requirements, so some dependencies are now newer than the versions we tested with. This is suboptimal because our environment now differs from prod.
+
+A more recent version of SQLAlchemy masks the password in engine.url by default (it was not masked before); this caused a connection error during db sync. I remembered running into this before when looking at nova's db sync code, where I saw `engine.url.render_as_string(hide_password=False)`. Updating our code to use this fixed the issue. \ No newline at end of file
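
For reference, a minimal sketch of the SQLAlchemy behaviour described above (the connection URL is a placeholder, not the real goba/gobs configuration):

```python
# Sketch of the URL password-masking change; the DSN below is a placeholder.
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://gobs:s3cret@db.example/gobs")

# In recent SQLAlchemy, stringifying the URL masks the password:
#   mysql+pymysql://gobs:***@db.example/gobs
# so passing str(engine.url) to a new connection fails to authenticate.
masked = str(engine.url)

# Rendering it explicitly keeps the credentials, mirroring nova's db sync code.
full = engine.url.render_as_string(hide_password=False)
```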
diff --git a/.trash/daily/06-May-2025.md b/.trash/daily/06-May-2025.md
new file mode 100644
index 0000000..a560729
--- /dev/null
+++ b/.trash/daily/06-May-2025.md
@@ -0,0 +1,31 @@
+[[Daily]]
+
+Checklist for maintenance OVN upgrade
+- [ ] Make
+- [x] xping all public router ips
+- [x] Open Grafana dashboard for OVN leaders
+- [x] Open Grafana dashboard for network metrics
+- [x] xping one vm on every compute node
+- [x] check ovn cluster status
+- [x] make sure ansible inventory covers all compute nodes
+- [x] make sure ansible inventory covers all network nodes
+- [x] make sure ansible inventory covers all ovn cluster database nodes
+- [x] Make sure network nodes are reboot proof
+- [x] Check ansible netconf for reboot proofness as well
+- [x] Check puppet status on all network nodes
+
+
+Xping: window 3
+Ansible playbook: window 4
+OVN db cluster nodes: window 4
+Neutron server tail: window 5
+
+
+Note
+- Need something smarter for xpinging. Freenet has way too many public router IPs; ideally just one IP per network and per compute node.
+  - Query OVN?
+- Try to capture how we can automatically check that the OVN databases are completely up, instead of waiting arbitrarily and checking by hand (a rough sketch follows after this note).
+- OVN upgrade 1: started around 2 o'clock; things fully recovered from my POV at 2:09
+  - OVN db upgrade step waits: check -> all looks good, cluster status OK
+  - first ovn-controller upgrade (n01) -> stuff stays down no matter how long I seem to wait: br-int connection timeout
+  - Decided to just try to push on anyway
+  - As soon as I continue with the next one, br-int seems to connect almost immediately (coincidence?) and things start to recover \ No newline at end of file
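
As a starting point for automating that "fully up" check, here is a rough sketch that polls `ovn-appctl cluster/status` until the local NB server reports itself as a connected cluster member with a known leader. The socket path and the string matching are assumptions, not a verified procedure.

```python
# Rough sketch: poll the OVN NB database's Raft status until it looks healthy.
# The ctl socket path and the output parsing below are assumptions.
import subprocess
import time

NB_CTL = "/var/run/ovn/ovnnb_db.ctl"  # adjust to the deployment's actual path


def nb_cluster_ready(timeout=300, interval=5):
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(
            ["ovn-appctl", "-t", NB_CTL, "cluster/status", "OVN_Northbound"],
            capture_output=True, text=True,
        )
        out = result.stdout
        # Heuristic: this member has joined the cluster and sees a leader.
        if "Status: cluster member" in out and "Leader: unknown" not in out:
            return True
        time.sleep(interval)
    return False


if __name__ == "__main__":
    print("NB cluster ready" if nb_cluster_ready() else "NB cluster NOT ready")
```

The same check could be repeated against the SB socket, replacing the "wait arbitrarily and look by hand" step during upgrades.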
diff --git a/.trash/daily/07-May-2025.md b/.trash/daily/07-May-2025.md
new file mode 100644
index 0000000..0070db5
--- /dev/null
+++ b/.trash/daily/07-May-2025.md
@@ -0,0 +1,22 @@
+[[Daily]]
+
+Working on the ZFS backup for goba with Mohammed. Mohammed was sidetracked for a couple of days setting up devstack. I had actually hoped he would come up with a plan of approach for the implementation (I hinted at that on Friday (or already on Thursday?)). We've now agreed that he will write one, and at 15:00 we have a meet to talk it through.
+
+Rutger hints that we are going to set up a business service that delivers products for the group on openstack.
+# Non-functional requirements:
+Technical constraints:
+- Python
+- Connexion
+- Venv deployed (no apt for dependency hell)
+- End to end tests
+- Unit tests
+
+Requirements:
+* Retrieving data should always be fast; correctness is less important.
+* Creating new resources should always be validated properly; it doesn't have to be fast as long as it's correct.
+* Reliable
+ * Creating or updating resources should be atomic
+# Functional requirements:
+- Authentication: the system requires users to be authenticated before being able to use its APIs.
+- Eventually consistent: if a requested resource fails to create, the client must not have to make another request; the creation of a resource is guaranteed.
+- Validation: a request must be strictly validated to ensure correctness; as soon as a request passes validation we must ensure it is fulfilled. \ No newline at end of file
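
A minimal Connexion sketch of the constraints above, assuming Connexion 2.x-style usage; the spec file and handler are hypothetical placeholders, not the actual service:

```python
# Hypothetical Connexion app: strict request validation comes from the OpenAPI
# spec, covering the "validate before fulfilling" requirement above.
import connexion

app = connexion.FlaskApp(__name__, specification_dir=".")
app.add_api("openapi.yaml", strict_validation=True, validate_responses=True)


def create_resource(body):
    # operationId target in openapi.yaml; only reached after validation passes.
    return {"status": "accepted", "resource": body}, 201


if __name__ == "__main__":
    app.run(port=8080)
```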
diff --git a/.trash/daily/09-May-2025.md b/.trash/daily/09-May-2025.md
new file mode 100644
index 0000000..0fba0ef
--- /dev/null
+++ b/.trash/daily/09-May-2025.md
@@ -0,0 +1,14 @@
+[[Daily]]
+
+
+Again disk space issues on the testpod. The notifications.sample queue was very large (24G). Threw away a huge directory containing quorum queue data.
+Threw away all mysql backups.
+Rebooted lxchost1 on the testpod.
+Noticed ceilometer also had huge logs.
+Purged the notifications.info queue as I saw a lot of spam about duplicate messages in ceilometer-agent-notification.log on the ceilometer node.
+Also noticed unread messages in the cinder-volume queue.
+It shows INTERNAL_ERRORs and failures to connect to rabbitmq.
+Stopped the cinder-volume service.
+Purged cinder-volume queue.
+Started ceilometer-agent-notification
+Right away seeing the same INTERNAL_ERROR messages as on cinder before. \ No newline at end of file
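
This kind of cleanup could also be scripted instead of done by hand; a hedged sketch wrapping `rabbitmqctl` (the size threshold is an arbitrary assumption, and purging queues indiscriminately is obviously destructive):

```python
# Sketch: find and purge oversized RabbitMQ queues via rabbitmqctl.
# The threshold is an assumption; purging discards the queued messages.
import subprocess

THRESHOLD = 100_000  # messages


def oversized_queues():
    out = subprocess.run(
        ["rabbitmqctl", "list_queues", "name", "messages"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1].isdigit() and int(parts[1]) > THRESHOLD:
            yield parts[0], int(parts[1])


for name, count in oversized_queues():
    print(f"purging {name} ({count} messages)")
    subprocess.run(["rabbitmqctl", "purge_queue", name], check=True)
```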
diff --git a/.trash/daily/10-Jun-2025.md b/.trash/daily/10-Jun-2025.md
new file mode 100644
index 0000000..e48f782
--- /dev/null
+++ b/.trash/daily/10-Jun-2025.md
@@ -0,0 +1,20 @@
+[[Daily]]
+
+
+Performance review
+
+Give better time estimates.
+
+Work environment
+Feedback: nicest person
+
+quality of work
+
+estimate better
+
+Goal: LPIC.
+
+
+---
+
+OVN loadbalancer health check. OVN nb: `nbctl show load_balancer_health_check` \ No newline at end of file
diff --git a/.trash/daily/11-Jun-2025.md b/.trash/daily/11-Jun-2025.md
new file mode 100644
index 0000000..edb7131
--- /dev/null
+++ b/.trash/daily/11-Jun-2025.md
@@ -0,0 +1,8 @@
+[[Daily]]
+
+
+access to openstack, marcus ripkens.
+Marcus: cloud-init scripts to set up VMs, loadbalancers, 2 dedicated people, DNS migration (Yves has no confidence in AXFR, no insight into what is/isn't synced correctly)
+bare metal -> 10 VMs, LB needs an update: domain -> new IP.
+500K+ customers.
+Convince Yves that the MHC team should manage the proxies if this is a permanent solution. \ No newline at end of file
diff --git a/.trash/daily/12-May-2025.md b/.trash/daily/12-May-2025.md
new file mode 100644
index 0000000..c6880ab
--- /dev/null
+++ b/.trash/daily/12-May-2025.md
@@ -0,0 +1,11 @@
+[[Daily]]
+
+
+Webglobe migration strategy
+
+Minimum assistance from us?
+
+They have a script that incrementally moves data.
+The existing version targets Proxmox; it can be modified to move data to virtually anything.
+
+Ansible, create infra.
diff --git a/.trash/daily/17-May-2025.md b/.trash/daily/17-May-2025.md
new file mode 100644
index 0000000..e728ec6
--- /dev/null
+++ b/.trash/daily/17-May-2025.md
@@ -0,0 +1,3 @@
+[[Daily]]
+
+# Notes on go
diff --git a/.trash/daily/18-Jun-2025.md b/.trash/daily/18-Jun-2025.md
new file mode 100644
index 0000000..2e8bd2c
--- /dev/null
+++ b/.trash/daily/18-Jun-2025.md
@@ -0,0 +1,2 @@
+[[Daily]]
+
diff --git a/.trash/daily/18-May-2025.md b/.trash/daily/18-May-2025.md
new file mode 100644
index 0000000..3fa1789
--- /dev/null
+++ b/.trash/daily/18-May-2025.md
@@ -0,0 +1,7 @@
+[[Daily]]
+
+Some programming problems or programs to make for practicing:
+- Echo stdin
+- Find duplicate lines on stdin or in files passed as arguments, either operating on a stream or slurping all input and processing it at once.
+
+# Further notes on Go
diff --git a/.trash/daily/19-May-2025.md b/.trash/daily/19-May-2025.md
new file mode 100644
index 0000000..9135451
--- /dev/null
+++ b/.trash/daily/19-May-2025.md
@@ -0,0 +1,3 @@
+[[Daily]]
+
+# Another note on Go
diff --git a/.trash/daily/22-Jun-2025.md b/.trash/daily/22-Jun-2025.md
new file mode 100644
index 0000000..a34f3c4
--- /dev/null
+++ b/.trash/daily/22-Jun-2025.md
@@ -0,0 +1,4 @@
+[[Daily]]
+## Wedding Vows
+
+
diff --git a/.trash/daily/23-Jun-2025.md b/.trash/daily/23-Jun-2025.md
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/.trash/daily/23-Jun-2025.md
diff --git a/.trash/daily/25-Jun-2025.md b/.trash/daily/25-Jun-2025.md
new file mode 100644
index 0000000..aaf8ba3
--- /dev/null
+++ b/.trash/daily/25-Jun-2025.md
@@ -0,0 +1,3 @@
+**they don't delete volumes**
+case can't cover:
+redeployment -> reset, crm identifier. \ No newline at end of file
diff --git a/.trash/daily/27-May-2025.md b/.trash/daily/27-May-2025.md
new file mode 100644
index 0000000..5a271be
--- /dev/null
+++ b/.trash/daily/27-May-2025.md
@@ -0,0 +1,6 @@
+[[Daily]]
+
+
+Decided that devenv is too invasive for use at work; I can't commit its .envrc or .pre-commit-config.yaml. Therefore I'm setting up just regular shell.nix usage. I quickly ran into an issue where the nix shell is using bash, and found a tool called Lorri which promises to be useful for developer environments with shell.nix. Let's hope that is less invasive than devenv.
+So far it's not very nice: it installs stuff in the background (a systemd daemon that listens for changes) and you have no idea when it's done.
+
diff --git a/.trash/daily/archive/01-May-2025.md b/.trash/daily/archive/01-May-2025.md
new file mode 100644
index 0000000..05bd76c
--- /dev/null
+++ b/.trash/daily/archive/01-May-2025.md
@@ -0,0 +1,8 @@
+[[Daily]]
+
+# Manila
+Share network
+Cinder
+
+**dhss = driver_handles_share_server**
+Means that Manila creates a share server, requiring a share network and a service image. \ No newline at end of file
diff --git a/.trash/daily/archive/10-Apr-2025.md b/.trash/daily/archive/10-Apr-2025.md
new file mode 100644
index 0000000..71ef3c7
--- /dev/null
+++ b/.trash/daily/archive/10-Apr-2025.md
@@ -0,0 +1,20 @@
+---
+tags:
+ - self
+ - reflection
+---
+[[Daily]]
+
+### I assumed that Mohammed made an oopsie, but instead it turned out to be one of us who forgot to clean up.
+Today I found out that on the testpod the sanoid user's ssh keys suddenly belonged to Mohammed's user. I went straight to his chat with the idea that he had probably done something silly, and even told him that it made me a bit worried.
+
+I also went to Rutger, who immediately pointed out that it was probably due to us changing the uid of the sanoid user. Which turned out to be true.
+
+I ask myself the following: why am I so quick to jump to a conclusion like "Ah, Mohammed might've caused damage by accidentally chowning too much or something like that."
+How can I stop myself from doing that?
+I think one way is to ALWAYS force myself to investigate completely, not talk about it with others right away. Keep it to myself until I really MUST communicate about it.
+
+
+#### Erik doesn't show up at the office for Carlos even though he said that he would be there on Thursday
+This kind of triggers a feeling that Erik has a bit of a lax attitude, which was often associated with ops back in the day as well.
+He is the designated mentor, but I feel he doesn't prepare well and just "goes with the flow" too much. \ No newline at end of file
diff --git a/.trash/daily/archive/11-Apr-2025.md b/.trash/daily/archive/11-Apr-2025.md
new file mode 100644
index 0000000..5b80457
--- /dev/null
+++ b/.trash/daily/archive/11-Apr-2025.md
@@ -0,0 +1,29 @@
+---
+tags:
+ - weekly
+---
+[[Daily]]
+
+This week:
+- [[10-Apr-2025]]
+
+Today marks another Friday, almost weekend, woohoo.
+
+This week few notable things happened.
+
+First of all, I have finally restored a Ceph-backed volume successfully! The issue was actually kind of silly: I forgot to close the read end of a pipe, so it kept blocking. Luckily I found it, and fixing it was rather trivial.
+After that I refactored a bit to make it a little better (still not great), and deployed.
+I also fixed the request ID logging that had been broken for a while: during refactoring of the agent RPC handler I accidentally moved the ctx.update_store call outside of the child thread, so the update was useless. Moving it back into the child thread gave us back our precious request IDs.
+Oh, and I also found the cause of a sporadic mysql "object belongs to a different session" issue in the backup service that was haunting me. It happened because the Unit of Work was instantiated only during application startup, specifically for the RPC handler, and then every RPC request used that same UOW.
+Because the UOW creates a new session every time, it wasn't completely broken, but occasionally two RPC calls could come in at the same time and then the latter would overwrite the session of the first.
+I fixed this by instantiating a UOW per request; this is also what happens in the API, and it is actually the correct way of using it (a small sketch follows at the end of this note).
+
+Then I kicked off the [[List of tags I use in this Vault and their purpose]] note, which contains a list of the tags I use within this vault so I don't forget.
+As with the current note I'm writing, I added the new "weekly" tag to indicate that this "daily" note is actually a week report, which I want to write every Friday from now on.
+
+Just had a little brainfart writing the above.. Is it too long? I plan to use these weekly notes to introspect during self assessments, but of course they shouldn't be too tedious to go through... Hmm, well, I guess we'll have to actually **use** it before deciding that.
+
+I think it will be good practice to link to the current week's notes in this weekly note as well, so.. see the top :) I probably should put more stuff into daily notes, and then make this a bit of a TL;DR.
+
+I have also been doing some thinking and note taking about [[TDD]] because I feel kind of bad about the current state of the backup service & agent (no tests). We've been looking at [[OpenStack Tempest]] for a bit which is interesting, but I would also like to just create better and more unit tests, especially during development. [[High Gear Low Gear Testing]] was a phrase from the cosmic python book that particularly seemed to resonate with me, but I haven't yet been able to practice it.
+
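
A small sketch of the per-request UOW fix described above; the class and handler names are illustrative, not the actual backup-service code:

```python
# Hypothetical sketch: build the Unit of Work per RPC request instead of once
# at startup, so concurrent requests never share (or overwrite) a session.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker


class SqlAlchemyUnitOfWork:
    def __init__(self, session_factory):
        self.session_factory = session_factory

    def __enter__(self):
        self.session = self.session_factory()  # fresh session per UOW
        return self

    def __exit__(self, *exc):
        self.session.rollback()  # harmless if already committed
        self.session.close()

    def commit(self):
        self.session.commit()


engine = create_engine("mysql+pymysql://user:pass@localhost/backups")  # placeholder
session_factory = sessionmaker(bind=engine)


def handle_rpc_request(payload):
    # One UOW per request, not one shared UOW created at application startup.
    with SqlAlchemyUnitOfWork(session_factory) as uow:
        ...  # do the work using uow.session
        uow.commit()
```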
diff --git a/.trash/daily/archive/14-Apr-2025.md b/.trash/daily/archive/14-Apr-2025.md
new file mode 100644
index 0000000..998526f
--- /dev/null
+++ b/.trash/daily/archive/14-Apr-2025.md
@@ -0,0 +1,24 @@
+[[Daily]]
+
+Monday!
+
+# Standup
+Install OnFailure handlers for the ZFS dataset rename script on the backup nodes.
+Fill in the survey
+Go over the failed puppet runs
+Figure out why the heck the QEMU GA sporadically fails
+- I believe this had something to do with a QEMU crash? Double-check; I think I have a note about it somewhere.
+# QA with Webglobe team
+Q: Virtual buses, volumes, suggested virtio. Can we do iSCSI instead of VirtIO because we do discards?
+A: We don't support discard. NetApp implements it by sending nul bytes -> increased IO
+^ I wouldn't know this
+
+Q: Can we install from CD?
+R: yes, possible. create image, props, iso boot, boot vm rescue from image.
+J: documented?
+R: Will find doc for onehome
+
+Q: IP addresses, do we really need to let OS handle the allocation?
+A: OpenStack does this out of the box; used as the single source of truth.
+J: Finds the reason acceptable; will rewrite.
+
diff --git a/.trash/daily/archive/15-Apr-2025.md b/.trash/daily/archive/15-Apr-2025.md
new file mode 100644
index 0000000..8b3a3e4
--- /dev/null
+++ b/.trash/daily/archive/15-Apr-2025.md
@@ -0,0 +1,30 @@
+[[Daily]]
+
+**Interview Ali**
+
+Q:
+Why are you leaving Leaseplan?
+About the openstack deployment: how automated is it; which tools?
+
+> Implemented virtual staging clusters mirroring the production architecture using KVM, libvirt, Linux bridge/virtual interfaces, and iptables, reducing setup time by 90% while significantly optimizing costs.
+
+Did you directly integrate to KVM/Libvirt, can you tell a bit about that?
+
+A:
+php/wordpress dev
+exp with cpanel and such
+then switched to devops
+
+
++1 kolla ansible / openstack
++1 cpanel/hosting
+-1 no puppet experience
++1 OVN experience; most notable problems are with ovs/ovn
+
+a somewhat odd sidetrack about config mgmt, ansible vs puppet; makes a point about consistency
+
+Q: regular day
+Q: expectations
+
+In Amsterdam
+Available: 1 June \ No newline at end of file
diff --git a/.trash/daily/archive/16-Apr-2025.md b/.trash/daily/archive/16-Apr-2025.md
new file mode 100644
index 0000000..591f700
--- /dev/null
+++ b/.trash/daily/archive/16-Apr-2025.md
@@ -0,0 +1,17 @@
+[[Daily]]
+
+1-on-1 with Rutger: nothing much discussed.
+
+***Interview Prep Isabel***
+*Do you live in Amsterdam? If yes, how long? plans to stay?*
+
+*Very shortly worked for ING (2025 january until now). What happened?*
+
+*At Civir you mention "deployment & administration of cloud technologies" including openstack; does this mean you deployed an openstack cloud or were you a user of an openstack deployment?*
+
+
+*You mention "24/7 support for troubleshooting issues" on multiple positions. What kind of issues?*
+
+***Isabel Q to us***
+
+**Isabel didn't show up** \ No newline at end of file
diff --git a/.trash/daily/archive/17-Apr-2025.md b/.trash/daily/archive/17-Apr-2025.md
new file mode 100644
index 0000000..be3f6cb
--- /dev/null
+++ b/.trash/daily/archive/17-Apr-2025.md
@@ -0,0 +1,4 @@
+[[Daily]]
+
+`puppet-neutron` merged; hassle with updating the dependency, in the end updated the commit hash in the lock file. See [[Debugging issues with updating Puppet dependency]].
+
diff --git a/.trash/daily/archive/18-Apr-2025.md b/.trash/daily/archive/18-Apr-2025.md
new file mode 100644
index 0000000..1b93076
--- /dev/null
+++ b/.trash/daily/archive/18-Apr-2025.md
@@ -0,0 +1,12 @@
+---
+tags: []
+---
+[[Daily]]
+
+# Today
+Encountered OOM on the lxchosts. It turned out that the octavia wsgi processes were using huge amounts of RAM.
+First we disabled apache on all octavia nodes to prevent more OOM kills.
+After that Erik limited their allowed memory usage, and we turned them back on.
+Found out that we can see which script apache runs in the vhost config. It turned out to be some CGI script.
+To profile the memory usage I stopped apache and ran a memory profiler directly against the CGI script; I had to stop the LB from using TLS, but apart from that it worked smoothly.
+We now have a flamegraph of the memory usage, and it looks like it has something to do with ovs. \ No newline at end of file
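
For the record, a sketch of how this kind of memory profiling could be reproduced with memray (not necessarily the profiler that was actually used); the module name is a placeholder:

```python
# Hypothetical sketch: capture allocations of the CGI entry point with memray.
import memray

from octavia_cgi_script import main  # placeholder for the real CGI script

with memray.Tracker("octavia-wsgi.bin"):  # writes a capture file
    main()

# A flamegraph can then be rendered from the capture file with:
#   memray flamegraph octavia-wsgi.bin
```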
diff --git a/.trash/daily/archive/22-Apr-2025.md b/.trash/daily/archive/22-Apr-2025.md
new file mode 100644
index 0000000..6ba73ae
--- /dev/null
+++ b/.trash/daily/archive/22-Apr-2025.md
@@ -0,0 +1,34 @@
+[[Daily]]
+
+octavia ovn provider memory leak; found [bug report](https://bugs.launchpad.net/neutron/+bug/2065460) which looks very promising. Trying to patch the driver with this patchset to see if we can fix it.
+
+
+# Interview Ali met Erik
+One month notice period, no rush.
+
+Mentions control plane services
+- keystone users projects
+works for Leaseplan; reason: layoff
+
+migration workflow:
+- legacy cluster
+
+Live migration, ceph shared; can't find a way to migrate the storage without an intermediate host.
+
+Knows how live migration works.
+
+Explanation OpenStack, trace server create call:
+- keystone, service catalog, token
+- nova api, nova scheduler, nova conductor
+  - doesn't know individual, whole schedules server
+ - nova libvirt talks libvirt, creates vm
+ - host aggregates mentioned + flavor extra specs
+  - vm calls metadata @ 169.254... mentioned
+ - cloud-init
+- nova wants port -> rabbit -> neutron
+- neutron api, ovn controller
+ - ovn northd, nb, sb
+ - neutron ml2 plugin translates neutron to ovn nb
+ - northd translates nb to sb
+ - ovn controller reads sb and translate to ovs on compute
+- glance image \ No newline at end of file
diff --git a/.trash/daily/archive/23-Apr-2025.md b/.trash/daily/archive/23-Apr-2025.md
new file mode 100644
index 0000000..00e741e
--- /dev/null
+++ b/.trash/daily/archive/23-Apr-2025.md
@@ -0,0 +1,69 @@
+[[Daily]]
+
+# Interview Isabel
+devops engineer 7yrs
+
+Provides maintenance of the cloud
+
+exp with openstack:
+ - deploying new compute
+ - maintaining
+ - remember rabbitmq incident: queueing
+ - not used puppet
+ - ansible
+#### Our questions
+How are you in programming?
+- Really like it, create many tools
+- Python / ansible deploy infra automated instead of manually
+ - *Realised manual labor and automated it*
+
+How do you feel about going more into a development role?
+- that's what I'm looking for; I prefer to be making things.
+
+Linux or windows experience?
+- check fs
+- processes
+- administrative
+- k8s many scripts
+
+How would you solve a problem where a VM is not starting?
+- Check nova compute for error
+- If ceilometer/logging check that
+- Try with nova-compute to restart if down
+- Reload instance (?)
+
+Have you ever had to go into openstack DBs?
+- Not really,
+- Do have SQL knowledge
+
+Do you know how to work with git?
+- yes, branch system current job
+
+**Why are you leaving your current position?**
+reason: wants different tech than at the bank
+same company (HCL) as in Spain, contract change
+
+
+*Deployment, maintenance, and administration of cloud technologies
+VMware, Azure, Openstack.*
+**Does this mean workloads running on said clouds? Or does this also apply to managing infrastructure such as openstack?**
+
+*Bash scripting for Linux server automation.* **What sort of automation?**
+
+*Plan and execute migrations and patching from on-premises infrastructure
+to ING Private cloud (IPC)* **Can you talk more about what kind of migrations**
+
+#### Isabel questions to us
+
+What does the usage of openstack look like from a customer perspective?
+- different kinds (brands, direct access)
+Own DC? Yes.
+Are you expecting me to create new components, or to maintain?
+- maintain, puppet etc.
+
diff --git a/.trash/daily/archive/24-Apr-2025.md b/.trash/daily/archive/24-Apr-2025.md
new file mode 100644
index 0000000..d15f8e4
--- /dev/null
+++ b/.trash/daily/archive/24-Apr-2025.md
@@ -0,0 +1,9 @@
+[[Daily]]
+
+Checked ceilometer thaw/freeze: linear vs unordered flow. No metrics on the testpod -> deploy to prod
+
+Investigated Octavia / system test cph8: couldn't find the network in OVN. A restart resolved the issue.
+
+Mark the OVN database cluster in ansible instead of treating all network nodes as database hosts. Test on the testpod.
+
+Check the list of backup contracts vs gobs: 170 active contracts vs 191 periodic backups, of which 44 are disabled. \ No newline at end of file
diff --git a/.trash/daily/archive/25-Apr-2025.md b/.trash/daily/archive/25-Apr-2025.md
new file mode 100644
index 0000000..9a00665
--- /dev/null
+++ b/.trash/daily/archive/25-Apr-2025.md
@@ -0,0 +1,19 @@
+[[Daily]]
+
+Gobs OSC plugin pagination
+Deploy goba with ceilometer thaw/freeze to prod
+
+# Tech interview Isabel
+Intro
+- [ ] What we will do: some questions then workshop
+- [ ] Erik intro
+- [ ] Isabel intro
+
+Questions:
+- [ ] What is your experience with `git`, are you comfortable with it?
+- [ ] Can you expand a little bit on your programming experiences, what are some examples of projects that you worked on?
+- [ ] What did you do with Ansible?
+- [ ] Can you talk a little about openstack: what is it, and what are its most important components?
+- [ ] Workshop
+
+Isabel questions: