--- Log opened Mon Jul 10 00:00:49 2023
01:01 * aestetix hugs Evilpig
01:01 < aestetix> turn that frown upside down
06:15 <@Dolemite> mr0ning, be0tches and h0ez!
07:47 <@Dolemite> Yay, I have a working stove, again
08:05 <@Dagmar> Hooo leee shit @ Franklin
08:05 <@Dagmar> https://www.newschannel5.com/news/franklin-police-working-to-identify-several-children-raped-by-local-soccer-coach
08:05 < PigBot> Franklin Police working to find children suspected of being raped by coach (at www.newschannel5.com) https://tinyurl.com/29fmtkvc
08:06 <@Dolemite> They are likely to find a body
08:22 <@Mirage> Evilpig: yeah, I snagged a similar one a while back. There were supposed to be workarounds for SOME of that series to get it to work w/ zoneminder, but I never had any luck.
08:24 < Evilpig> For my purpose I think I'm gonna be able to make it work to discourage any further car break-ins. The light turning on when it detects motion is VERY visible
08:28 < Evilpig> using their desktop app I got it to save the email settings without the password and it was able to send me video clips. so I'm thinking that I can set a rule up in outlook to shuffle those into a folder, then set up a script to go nab them and remove from the server
08:33 <@Mirage> The one I got only likes 2.4GHz networks. It claims to see the 5GHz, but won't connect to it.
08:41 < Evilpig> I think this got on my 5g. I need to verify though
08:42 < Evilpig> 6 (11ng)
08:42 < Evilpig> 2.4
08:45 < Evilpig> I guess I could try locking it to the 5g network. I have one dedicated for the shields
08:46 <@Mirage> Yeah, this is the one I got. https://www.amazon.com/gp/product/B07W6PXRR8/
08:46 < PigBot> Amazon.com (at www.amazon.com) https://tinyurl.com/2dzcz9oc
08:46 <@Mirage> "Reolink Argus PT w/ Solar Panel"
08:46 < Evilpig> I just got the newer model of that
08:47 <@Mirage> Yeah
08:47 < Evilpig> other than all the bad things I've said about it, I like that it came with straps for the mounts.
08:47 < Evilpig> going to use those to install it here so that way the property manager can't bitch about me attaching things to the outside
08:49 < dasunt> Thinking about my new homelab setup. Got three boxen - A, B, and C - that will be in a proxmox cluster. A, B, and C have static IPs. Does anyone see a problem if I put OpenLDAP on the physical boxen, yet DHCP/DNS as a VM?
08:50 < dasunt> Note /etc/hosts has hardcoded entries for each of the physical boxes on all three machines. So I'm trying to think what would break if the VMs failed to come up.
08:52 <@Mirage> I have DHCP/DNS split so that if the VM dies things on the network can still work. Losing power and then trying to bring up VMs w/o working DNS/DHCP once was more than enough.
08:52 < Evilpig> if you put your dhcp on whatever that stinkin thing is that replaced dhcpd you can put it on multiple boxes with a shared sql backend
08:53 < Evilpig> kea
08:53 < Evilpig> I have to get migrated over to that eventually.
08:53 <@Mirage> Being the crazy person I am, I still use ISC-DHCP and just set up OMAPI
08:55 < Evilpig> I'm still using dhcpd myself but I manage that config by hand like a crazy person
09:05 < dasunt> Yup, kea.
09:05 < dasunt> Not sure how clustered DHCP in kea works with clustered DNS for DDNS.
09:07 < dasunt> Reading the documentation, it seems like Bind9 still does primary/backup configuration, so if the primary goes down, I'm not sure if DDNS will still work.
09:08 <@Dagmar> It will
09:08 <@Dagmar> So... Dynamic DNS updates from DHCPd into BIND are much less magical than you'd expect
09:09 <@Dagmar> It's like, an almost _normal_ rndc connection that lets the DHCPd update pretty much whatever the fuck records it wants to
09:10 < dasunt> I've done it with one ISC-DHCP/Bind9 setup. Not sure how Bind9 does updates if the primary dies.
09:10 <@Dagmar> ...and with each new lease, it stores an A record, a PTR record, and the weird thing I wish I could filter from queries but I can't... the DHCID record, which is the only reason the thing knows later that what it's looking at is a dynamically-generated entry
09:12 <@Dagmar> It doesn't throw a tantrum when you tell it about more than one, but I really doubt it's going to push an update to more than one of what it's told about, _and_ that would be very fucking wrong
09:12 < dasunt> I do know that Kea does some fuckwittery with how it gives out leases, but I haven't tested it in practice. From what I read, it looks like Kea waits until it stores the assignment in its database before it replies to the DHCP request, but how that works if part of the cluster goes down, IDK.
09:12 <@Dagmar> You do not make updates on the secondary (slave!) server
09:13 <@Dagmar> I have seen people trying to do "multi-master" things which should just be called "cluster fuck configuration"
09:13 < dasunt> Or if the primary DNS goes down.
09:13 <@Dagmar> It's never going to work out well
09:13 < dasunt> I didn't think Bind9 did multimaster well.
09:13 <@Dagmar> Fuckin' nothing does "multi-master" well
09:14 <@Dagmar> It always introduces race conditions unless you've got shit perpetually in sync and synchronized very carefully, which is the purview of actual databases and people still pay money for that
09:14 <@Dagmar> DNS was never meant to be quite that damn complex
09:15 <@Dagmar> Going that way is definitely on the far side of "Okay, the whole goddamn thing needs to be moved into a real database now"
09:15 <@Dagmar> Thankfully there are saner solutions to this shit
09:15 <@Dagmar> Like, "how hard is it to keep a single goddamn nameserver instance running"
09:16 <@Dagmar> If you need multiple "master" DNS servers involved in some shit like this, _cheat_
09:16 < dasunt> I agree with you, in an enterprise setting.
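[A minimal sketch of the shared-backend Kea setup Evilpig mentions, and the lease-commit behavior dasunt describes: each kea-dhcp4 server points at the same lease database, and a lease is written there before the server replies. All hostnames, credentials, and addresses below are hypothetical; newer Kea releases also ship a dedicated high-availability hook library, which is a separate approach.]

```
// /etc/kea/kea-dhcp4.conf -- sketch only. Run the same lease-database
// stanza on every Kea server so they all allocate from shared state.
{
"Dhcp4": {
    "interfaces-config": { "interfaces": [ "eth0" ] },

    // One MySQL backend shared by all servers; Kea commits the lease
    // here before it ACKs the client.
    "lease-database": {
        "type": "mysql",
        "host": "db.lab.internal",
        "name": "kea",
        "user": "kea",
        "password": "changeme"
    },

    "subnet4": [ {
        "subnet": "192.0.2.0/24",
        "pools": [ { "pool": "192.0.2.100 - 192.0.2.200" } ]
    } ]
}
}
```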
09:16 <@Dagmar> Put the actual master for the zone on the same damn instance as the DHCPd, and have all the other "master" servers actually be slaves to that
09:16 <@Dagmar> Same shit that would normally be done for annoyingly complex DNS-only setups
09:17 <@Dagmar> NOTIFY will happen from the unpublished master to the published (accessible) masters in almost real-time as per usual
09:19 <@Dagmar> Simply put, someone gets a lease, the DHCPd updates records in a BIND instance that's so close to it they might as well be one piece, and then that thing goes and propagates those changes out to the nameservers the rest of the network thinks of as authoritative
09:19 < dasunt> This is almost why I want to keep everything as VMs.
09:19 <@Dagmar> Where I'm stuck at the moment is that the docs and what is presently available for Kea is an absolute shitshow
09:20 < dasunt> I just *don't* want to create a circular dependency where the physical servers rely on the VM cluster being up.
09:20 <@Dagmar> ...which pisses me off because I'd _just_ gotten some of the basics worked out for a modestly scalable DHCP/DNS management interface which doesn't require someone to take the entirety of Infoblox up the butt
09:21 <@Dagmar> Cuz at the moment there's bullshit that requires people to manually edit text files, and then there's overly complex nightmares like Infoblox with absolutely no middle ground
09:21 <@Dagmar> ...but like, using dynamic DNS updates and BIND isn't actually as complex as it would at first appear
09:22 <@Dagmar> You're just not going to get your DHCPd instances updating the same zone on multiple master servers itself. That's very outside the scope of what it's supposed to be doing
09:22 <@Dagmar> Update one instance, let that instance update the others... _as BIND was designed to do_
09:23 <@Dagmar> _That_ is a thing that will run smoothly and reliably
09:24 <@Dagmar> Seriously tho they really fucked me by deciding to announce they're sunsetting the products everyone is using and then doing this big ol' mess with Kea
09:25 <@Dagmar> I am kinda hoping at some point very soon tho that BIND will get the ability to apply ACLs to record types, because the DHCID records aren't something anything but DHCPd should give a fuck about
09:25 <@Dagmar> Not sure how keen I am on having a convenient indicator to know which hosts are using DHCPd and which hosts are static
09:26 <@Dagmar> Doesn't feel like good opsec
09:27 < dasunt> I have the unusual situation where I'm not dealing with a lot of servers (VM or otherwise), but I do tend to create/destroy servers a lot in the lab environment.
09:27 <@Dagmar> You and me both
09:28 < dasunt> And they tend to be tiny (512MB RAM, 1 vCPU, 20GB disks)
09:28 < dasunt> Unless I have to run Redhat's shit, then they get larger, because RH seems a bit bloated.
09:28 < dasunt> Or Windows shit.
09:28 <@Dagmar> I blame systemd
09:28 <@Dagmar> ...because it's definitely fuckin' systemd
09:29 <@Dagmar> I haven't seen sign that the people responsible for the result have the sanity required to do it properly
09:29 < dasunt> IDK. Debian is systemd as well, it still works fine with half a gig of ram.
09:30 < dasunt> I love to hate on SystemD, but I think RH's bloat is more than that.
09:30 <@Dagmar> If one suggests "Hey, maybe virtuals don't need that fuckin' complexity so we could make our virts smaller" their universal response appears to be "Now is the time for you to transition to our containerization product!"
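[The hidden-master layout Dagmar describes, sketched as named.conf fragments: the unpublished primary lives next to the DHCPd, takes its dynamic updates, and NOTIFYs the published servers, which are ordinary secondaries. Zone names, key name, and addresses are hypothetical.]

```
// On the unpublished primary (same host as the DHCPd, e.g. 192.0.2.5):
zone "lab.internal" {
    type master;
    file "dynamic/lab.internal.db";
    allow-update { key "dhcp-update"; };      // only the DHCPd may update
    also-notify { 192.0.2.10; 192.0.2.11; };  // the published "masters"
    allow-transfer { 192.0.2.10; 192.0.2.11; };
};

// On each published nameserver (192.0.2.10, 192.0.2.11):
zone "lab.internal" {
    type slave;
    masters { 192.0.2.5; };                   // the hidden primary
    file "secondary/lab.internal.db";
};
```

With this shape the DHCPd only ever updates one instance, and BIND's normal NOTIFY/zone-transfer machinery handles propagation to everything the rest of the network treats as authoritative.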
09:30 <@Dagmar> ...and my response to that will always be "No, and fuck you for presuming you already know my project requirements"
09:31 <@Dagmar> Systemd is practically 800Mb of it all by itself
09:31 < dasunt> Kubernetes: Containers should do one thing and one thing well, and they should be unimportant enough that scaling is a matter of adding more instances, and any problems can be dealt with by killing/restarting them.
09:31 <@Dagmar> Without systemd it becomes almost easy to have a minimal install system that only sits on about 260-300Mb of RAM
09:31 < dasunt> Every fucking person who makes a container: This is a very important container, and does everything. Also, it doesn't scale well.
09:32 <@Dagmar> That's because they use containers as an excuse and mechanism for hiding shitty architecture
09:32 < dasunt> Yup.
09:33 <@Dolemite> s/fucking person/fucking idiot developer/
09:33 < dasunt> Don't forget the dependency hell they are covering up with containers - shit like needing v1.3.5-beta-some-guy-in-Nebraska's-fork.
09:33 <@Mirage> On mine I just stuck w/ the sane stuff...single master, multiple slaves. DHCPD has a key and sends ddns updates to the master, which then replicates it out whenever. The main thing for me if I have my master or the whole esx environment go down is that I can 1) Still get an IP easily, 2) Easily be able to access the internet, 3) be able to resolve internal stuff via DNS name (so that I don't run into HSTS issues w/ my esxi boxes and vCenter)
09:33 <@Dagmar> I keep shit out of containers and in full-fledged instances because I want them to be entirely and 100% independent from other processes
09:33 <@Dolemite> There are some well made containers, but usually by the people who actually do the Ops more than the Dev in DevOps
09:33 <@Mirage> Evilpig: I of course do everything from the command-line as well for DNS/DHCP.
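[On the dhcpd side, Mirage's "DHCPD has a key and sends ddns updates to the master" boils down to a TSIG key shared with BIND. A rough dhcpd.conf sketch; the key name, secret, zone names, and addresses are hypothetical, and older dhcpd versions use `ddns-update-style interim;` instead of `standard`.]

```
# dhcpd.conf -- send signed dynamic updates to the zone's primary
ddns-update-style standard;

key "dhcp-update" {
    algorithm hmac-sha256;
    secret "base64-secret-goes-here==";   # same key configured in named.conf
}

# Forward zone: dhcpd adds the A (and DHCID) records here
zone lab.internal. {
    primary 192.0.2.5;
    key "dhcp-update";
}

# Reverse zone: dhcpd adds the matching PTR records
zone 2.0.192.in-addr.arpa. {
    primary 192.0.2.5;
    key "dhcp-update";
}
```

After a successful lease, the zone ends up holding exactly the records Dagmar listed earlier: an A, a PTR, and the DHCID record dhcpd uses to recognize its own entries later.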
09:33 <@Dagmar> So, not DevOops people then
09:33 < dasunt> I love containers, I just hate people using containers.
09:34 <@Dolemite> I, too, like the portability of containers - when done correctly
09:34 <@Dagmar> Yeah I'm not averse to the idea of containers, but I do object to shitbirds using it as an excuse to noob things up, and other shitbirds trying to tell me my business
09:34 < dasunt> Follow the goddamn philosophy behind why we're using solutions like Kubernetes, and shit actually works.
09:34 <@Dolemite> ^^^
09:35 <@Dagmar> Like, my personal favorite is that they've got six whole containers, and the only reason they've got those is because they leveraged a management process "framework" that's more heavyweight than all six containers
09:35 <@Dagmar> What the fuck
09:35 <@Dagmar> How do people not see that as an obvious problem
09:35 < dasunt> Heh, we have one cluster just for one automation platform because the developers only released a containerized version.
09:36 <@Dagmar> They're just playing shell games, and I don't mean bash games. I mean 3 Card Monte as system administration
09:37 <@Dagmar> Point at a "tiny, lightweight" thing they've declared as one-size-fits-all and ignore the fact that its overhead and additional complexity are noticeably larger than what happens without it
09:38 < dasunt> It's how corporations work. At a certain level, it's less about doing (and doing it right) and more about the illusion.
09:38 <@Dagmar> I'm just hoping I won't become the reason people say "Sometimes the actual full stack people carry knives"
09:39 <@Dagmar> It does absolutely make me want to "cut a bitch" tho
09:40 < dasunt> I've bitched about it before, but there are entire teams that are successful because they avoid accountability and responsibility, yet give the illusion of doing their job.
09:41 < dasunt> "So you're the XYZ team? Does that mean you support it? No, that's what L1 does? Okay, implementation? Oh, that's the architecture team? Well, configuration? Oh, that's what the paid support does? WTF DO YOU DO?"
09:44 < dasunt> I wish I was joking.
09:47 < dasunt> Oh, in happier news, 1TB M.2 SATA SSDs are down to $50 from what should be reputable brands.
09:56 <@Dagmar> Yeah Samsung has been dipping into that range with some of their units during sales in the last month
09:57 <@Dagmar> I gotta hurry up and get this damn 2Tb 970EVO into my wife's machine before she's asking me why I didn't spend more money to upgrade her to a 4Tb
09:57 <@Dagmar> Adobe has _clearly_ shifted to deciding everyone should use goddamn SSDs
10:09 < dasunt> Heh.
10:10 < dasunt> So what I've been learning is how much of a POS a lot of the cheaper stuff is.
10:11 < dasunt> Not so much speed - I understand that. More about what features it does and doesn't support.
10:11 <@Dagmar> Yes. No-name SSDs (particularly m.2) are a shortcut to disaster recovery training
10:13 < dasunt> Was watching someone talk about energy efficiency in their home lab, and they had a problem getting the lower power-saving states for cheap drives - it wasn't supported. In a large environment, that may be NBD - the drives could always be active. In a small environment, there's a use case to keep some storage in a lower power state in places with higher energy costs.
10:15 < dasunt> Although, TBH, I don't know what the hell we're using for colder storage. I'd assume tapes, but *shrug*
10:25 <@Mirage> I dunno why, but this just popped into my head: "Never underestimate the bandwidth of a station wagon full of hard drives."
10:39 <@Dolemite> Yeah, when I went to college they still said backup tapes instead of hard drives
10:39 < dasunt> Backup tapes are still commonly used, as far as I know.
10:40 < dasunt> It's cheap, long term storage.
10:44 < dasunt> So funnies with OpenAI's ChatGPT - I'm told it was trained on Reddit as well as other sources, and a bit of preliminary testing seems to indicate that it can mimic the style of prolific reddit posters.
10:52 < dasunt> Holy shit, this meeting goes on for far too fucking long because *preparing* for meetings is impossible for our team.
10:53 < dasunt> 8 people, 90 minutes so far. So we've lost 1.5 FTE-days.
10:55 <@Mirage> dasunt: if you were at VU, they would identify that as a problem and schedule another meeting to discuss it at least 1-2 times a week
10:57 <@Mirage> That crap used to just kill me w/ the Project Managers. "Why haven't you gotten ZYX done yet?" "I've been busy with tickets and all the meetings on my calendar make it hard to get time to do it." "Do we need to schedule another meeting to discuss this and maybe escalate it to ?"
10:58 <@Mirage> what was his name...Steve Bryant? Heavy set bald dude.. He was the WORST about that.
10:58 <@Mirage> His favorite thing to say was "I'm gonna have to flag the project as [yellow|red]!"
11:05 <@Dolemite> heh, Judy was the only PM I ever had to deal with since she handled anything construction related
11:06 <@Mirage> Meetings were scheduled at the CP? Which meeting room did she refer to it as?
11:07 <@Dolemite> Oh we usually met in the data center because other than me, it was nothing but construction folks
11:59 < dasunt> Did I say 90 minutes? 150 minutes.
11:59 <@Dolemite> You should schedule a meeting to discuss the lost time due to meeting overruns
12:00 < dasunt> Dolemite: I've actually brought up the idea that maybe we could do a bit of groundwork to make our meetings more efficient. But nope.
12:00 < dasunt> The solution to too many meetings is not to discuss how we suck at meetings.
13:17 <@Mirage> The key to a successful meeting, IMO, is having one person who is willing to tell ppl to shut the fk up when they get off track, has no problem with calling someone out for not being prepared, and has the power to re-assign work to someone else if a resource can't do what they were supposed to.
13:18 <@Mirage> Oh, and ending early as often as possible
15:20 < dasunt> Mirage: Yuppers.
15:20 < dasunt> Also, y'know, being prepared beforehand.
15:20 < dasunt> But hey, it's great when we waste five or ten minutes trying to find a file.
15:21 <@Mirage> stepmother posted this stupidity.. https://i.postimg.cc/ZRmYdsKG/wtf.png
15:21 < PigBot> wtf — Postimages (at i.postimg.cc) https://tinyurl.com/255luen8
15:22 < dasunt> Nazi-like posting detected!
15:23 < dasunt> Let me guess, she thinks the people who really have it hard are the white people...
15:24 <@Mirage> It's funny to me how much of a racist she is when she got ALL bent out of shape when my grandmother made a (mostly) innocent off-hand comment about a hispanic relative of hers after the wedding to my father.
15:37 < dasunt> That reminds me of a story when someone I knew discovered their mother didn't approve of gay marriage, and he told her that was like not accepting interracial marriages in this day and age, just to discover his mother had a problem with that as well.
15:41 < dasunt> https://i.postimg.cc/ZRmYdsKG/wtf.png
15:41 < PigBot> wtf — Postimages (at i.postimg.cc) https://tinyurl.com/255luen8
15:41 < dasunt> Whoops.
15:41 < dasunt> Glad I accidentally posted the link in the same channel I got it from, else that would have been embarrassing out of context.
15:41 <@Mirage> yeah
15:57 <@Mirage> This explains the trends in cars/trucks getting bigger and bigger.. https://youtu.be/azI3nqrHEXM
15:57 < PigBot> Why We Can't Have Small Trucks Anymore - Blame the EPA - YouTube (at youtu.be) https://tinyurl.com/2az3dwff
16:18 < Synx_> anybody with pixel buds
16:18 < Synx_> ?
16:19 < Synx_> I have the series a and the case drains itself in about a day and a half
16:27 < Evilpig> I have a pair
16:27 < Evilpig> weird. my case holds charge pretty good. the buds themselves are good for about 2.5 hours though which is shitty
16:39 < Synx_> that sucks
16:41 < Synx_> i got my spouse the original alexa buds with anc, those are fairly decent when we can find them. at the moment they have gotten themselves lost, bastards
16:43 < Synx_> Evilpig, does your case slow flash its white light 24/7 when not in use?
17:58 <@Mirage> bleh. 91 outside w/ ~70% humidity... Heat Index ~110.
18:10 < Evilpig> no
18:10 < Evilpig> the white flash is supposed to mean it's in use or charging
18:30 <@Dagmar> Man I'm surprised the Franklin soccer coach situation isn't already on CNN
18:31 <@Dagmar> Mirage: I figured something along those lines was going on when I noticed how many dealerships had transitioned to like 90% "big-ass SUVs"
18:36 <@Dagmar> Now it makes sense that it would be the natural outcome of relaxed EPA restrictions on big-ass SUVs
18:36 <@Dagmar> Less EPA trouble, more profit
18:37 <@Dagmar> All dealerships had to do was talk rednecks into buying giant-ass trucks, which clearly most of them did
21:29 < dasunt> Mirage: I call bullshit. You can still buy smaller trucks, but they are basically fleet vehicles.
21:32 < dasunt> The fact that they are basically fleet vehicles seems to indicate there's not a huge demand for them.
22:22 < aestetix> Evilpig: do you use your 100 TB nas as a seedbox too
22:49 < Evilpig> yes I seed things out
22:49 < Evilpig> some things a lot, other things not so much
23:48 < aestetix> hmm
23:48 < aestetix> I wish I could do that
23:48 < aestetix> torrenting is super highly illegal in germany, plus my internet is not good enough for it
23:48 < aestetix> thus I have to have a seedbox
23:48 < aestetix> I would love a 100 TB seedbox
--- Log closed Tue Jul 11 00:00:50 2023