--- Log opened Sat Aug 31 00:00:58 2019
09:17 <@Mirage_> Evilpig: pgPool
09:19 <@Mirage_> Worked wonders for the old PoS Sitemason service when we re-architected. I can't remember for sure, but I think Dolemite was the one that turned us on to it because we kept hitting the connection limit for the BD.
09:19 <@Mirage_> er. DB
09:19 <@Mirage_> https://www.pgpool.net/mediawiki/index.php/Main_Page
09:19 < PigBot> pgpool Wiki (at www.pgpool.net) http://tinyurl.com/yyzwxlvu
09:22 -!- Mirage_ is now known as Mirage
09:25 <@Evilpig> Mirage: that would be nice if we could set up a pool for these things. the issue we're seeing is that in redhat's infinite wisdom they write every task from an ansible playbook run to the database as a unique entry
09:26 <@Evilpig> we have this job that takes 11 minutes to run, and then we watch the db server clear the event spool for the next 10 minutes after the playbooks have ended
09:27 <@Evilpig> I think it's just a bad design on their part. we originally had it set up to use a clustered instance but their patching process broke the living shit out of that
09:27 <@Evilpig> I think it's a way for them to try to push us toward putting this in openshift
09:46 <@Dagmar> So, y'all have seen this shit about Facebook, right?
09:47 <@Dagmar> https://twitter.com/wongmjane/status/1167463054709334017
09:47 < PigBot> Jane Manchun Wong on Twitter: "Facebook scans system libraries from their Android app user’s phone in the background and uploads them to their server This is called "Global Library Collector" at Facebook, known as "GLC" in app’s code It periodically uploads metadata of system libraries to the server… https://t.co/K0OOqDgODH" (at twitter.com) http://tinyurl.com/y5qe6576
09:48 <@Dagmar> ...and if you read the entire mess of a Twitter thread, it uploads way more than just metadata, it uploads entire libraries in great quantity.
09:48 <@Dagmar> Because that doesn't violate both copyright law and the GDPR or anything
09:48 <@Dagmar> Evilpig: It
09:49 <@Dagmar> Evilpig: If it's taking the db ten minutes to settle after Ansible's all done, y'all might wanna consider that it desperately needs some tuning
09:50 <@Dagmar> I've not seen a pg server get like that unless write caching was enabled, because normally all those damn sync() calls do a good job of slowing clients down
09:53 <@Evilpig> best I can tell it's processing a queue from somewhere. we upped the CPUs so it could do more tasks concurrently and it did speed up, but it just seems to be too many events. 900+ clients writing thousands of events to it over a 10 minute period once an hour.
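The pooling idea Mirage points at — keeping a bounded set of reusable connections so hundreds of clients don't blow past the server's connection limit — is what pgpool provides as middleware in front of PostgreSQL. As a rough sketch of the same idea done on the client side (not pgpool itself, and not how Tower/AWX actually writes events), assuming psycopg2 and a hypothetical DSN and table name:

    # Sketch of client-side connection pooling, assuming psycopg2 is installed.
    # pgpool does this (and more) as middleware; this only illustrates reusing a
    # bounded number of connections instead of opening one per task/event.
    from psycopg2.pool import ThreadedConnectionPool

    # DSN and pool sizes are hypothetical placeholders.
    pool = ThreadedConnectionPool(
        minconn=2,
        maxconn=20,  # hard cap keeps clients well under the server's max_connections
        dsn="dbname=tower user=awx host=db.example.internal",
    )

    def record_event(payload):
        """Borrow a connection, write one event, and return it to the pool."""
        conn = pool.getconn()
        try:
            with conn, conn.cursor() as cur:  # 'with conn' commits, or rolls back on error
                cur.execute("INSERT INTO job_events (payload) VALUES (%s)", (payload,))
        finally:
            pool.putconn(conn)

With a real pgpool deployment the applications keep connecting as if to PostgreSQL and the pooling happens in pgpool.conf, so no application changes like the above are needed.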
09:54 <@Dagmar> Yeah, so those clients shouldn't be going away until the db comes back and says their transactions are complete
10:58 -!- xray [~xray@c-73-43-3-64.hsd1.ga.comcast.net] has quit [Ping timeout: 245 seconds]
10:58 -!- xray [~xray@c-73-43-3-64.hsd1.ga.comcast.net] has joined #se2600
13:22 <@Dagmar> It's possible they configured the db to actually not force a sync() after each atomic write, in which case it _will_ take some time for everything to settle
13:54 -!- K`Tetch_ [~no@unaffiliated/ktetch] has joined #se2600
13:58 -!- K`Tetch [~no@unaffiliated/ktetch] has quit [Ping timeout: 258 seconds]
14:59 -!- xray [~xray@c-73-43-3-64.hsd1.ga.comcast.net] has quit [Ping timeout: 258 seconds]
15:00 -!- xray [~xray@c-73-43-3-64.hsd1.ga.comcast.net] has joined #se2600
16:05 -!- K`Tetch [~no@unaffiliated/ktetch] has joined #se2600
16:08 -!- K`Tetch_ [~no@unaffiliated/ktetch] has quit [Ping timeout: 246 seconds]
16:41 -!- K4k [elw@unaffiliated/k4k] has quit [Ping timeout: 258 seconds]
19:08 -!- Warcop [~josh@mobile-166-173-251-236.mycingular.net] has joined #se2600
19:09 -!- NotWarcop [~josh@mobile-166-172-59-107.mycingular.net] has quit [Ping timeout: 245 seconds]
19:40 <@Evilpig> fantastic! the latest episode of scooby doo and guess who has weird al
20:01 <@Mirage> lol
21:13 -!- crashcartpro [uid29931@gateway/web/irccloud.com/x-uitrecatxhraonlx] has joined #se2600
23:22 -!- crashcartpro [uid29931@gateway/web/irccloud.com/x-uitrecatxhraonlx] has quit [Quit: Connection closed for inactivity]
--- Log closed Sun Sep 01 00:00:00 2019
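The settling behaviour Dagmar describes usually comes down to PostgreSQL's durability settings: with fsync or synchronous_commit turned off, commits return before the WAL is flushed to disk, so a burst of writes can keep draining long after the clients have finished. A minimal sketch for checking those settings, assuming psycopg2 and a hypothetical DSN:

    # Sketch for inspecting PostgreSQL durability settings, assuming psycopg2.
    # If fsync or synchronous_commit is off, commits can return before data hits
    # disk, which matches the "db keeps settling after the run" behaviour above.
    import psycopg2

    # DSN is a hypothetical placeholder.
    conn = psycopg2.connect("dbname=tower user=awx host=db.example.internal")
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT name, setting FROM pg_settings "
            "WHERE name IN ('fsync', 'synchronous_commit', 'commit_delay', 'wal_writer_delay')"
        )
        for name, setting in cur.fetchall():
            print(f"{name} = {setting}")
    conn.close()

synchronous_commit = off trades a small window of possible data loss for faster commits; fsync = off is the riskier "write caching" case mentioned in the log and can corrupt the database on a crash.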