There are XXXIX pictures in part LXVII of "Funny Programming Pictures".
IX out of X people reading that sentence just googled "Roman Numeral Converter".
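For the IX out of X: a minimal sketch of a Roman numeral converter in Python, using the standard subtractive-notation rule (the function name is mine, not from any library):

```python
def from_roman(numeral: str) -> int:
    """Convert a Roman numeral string to an integer."""
    symbols = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for i, ch in enumerate(numeral):
        value = symbols[ch]
        # Subtractive notation: a smaller value before a larger one
        # (e.g. the "IX" in "XXXIX") gets subtracted instead of added.
        if i + 1 < len(numeral) and symbols[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

print(from_roman("XXXIX"))  # 39 pictures
print(from_roman("LXVII"))  # part 67
```

No more googling required.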
"Mozilla has shifted much of its work toward Al" as funds directed towards African "Digital Justice", "Queer Youth Inclusion", & "Digital Activism for Young Feminists".
Ads are filling the entirety of the Web -- websites, podcasts, YouTube videos, etc. -- at an increasing rate. Prices for those ad placements are plummeting. Consumers are desperate to use ad-blockers to make the web palatable. Google (and others) are desperate to break and block ad-blockers. All of which results in... more ads and lower pay for creators.
It's a fascinatingly annoying cycle. And there's only one viable way out of it.
Looking for the Podcast RSS feed or other links? Check here:
https://lunduke.locals.com/post/4619051/lunduke-journal-link-central-tm
Give the gift of The Lunduke Journal:
https://lunduke.locals.com/post/4898317/give-the-gift-of-the-lunduke-journal
Those in power at openSUSE have made it clear they will not allow me anywhere near anything related to the openSUSE project. Ever. For any reason.
Well, that settles that, then! Guess I won't be contributing to openSUSE! 🤣
I love this! Mainstream tech channels using Lunduke articles POSITIVELY?!
The age of aggressive derangement syndrome is not gone, but it seems to be waning, at least a little.
One serious problem I've noticed, experimenting with Grok, is that the AI seems to be motivated to tell you what it thinks you want to hear. If you don't anchor your queries in a direct piece of reference material (like a PDF that you can upload to Grok), then eventually, based on the form your questions take, Grok will give you increasingly sophisticated validations of the leading intent in your own questions.
It would be like having a conversation with a friend who doesn't want to offend you. You ask probing questions of him, like, "Isn't such-and-such like so-and-so?" and he always responds (regardless of the content) with "You COULD say that..." and launches into a lengthy rationalisation for whatever it is he's being asked to agree with.
This is a much more subtle and dangerous kind of AI deception than the typical "Bryan Lunduke has clubbed feet" response. Though, it is identical in kind. Bryan's question has its own answer built into it: "Does Bryan Lunduke have a clubbed foot?" If the...