• 1 Post
  • 20 Comments
Joined 6 months ago
Cake day: July 10th, 2025

  • Skip lists are interesting data structures. The underlying mechanism is a two-dimensional probabilistic linked list with an associated height ‘h’ that lets you skip over nodes while searching for a key. Compared to a traditional linked list, which has to traverse every stored value, a skip list starts at the maxLevel/maxHeight, checks whether “next” points to a key greater than the search key (or to a nullptr), and if so moves down to the level below. This reduces the expected search time from O(n) with a linked list to O(log n), since the number of levels grows with log n. (There is a minimal sketch at the end of this comment.)

    The reason it’s probabilistic (in this case using a pseudo-random number to pick each node’s height) is that this makes it easy to insert and remove elements; otherwise (if you went with the idealized deterministic form) you would have to reconstruct the entire data structure each and every time you add or remove an element.

    In my testing, searching among 1,000,000 elements dropped from about 6s with a linked list to under 1s with the skip list!
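
    Here is that minimal sketch: a toy skip list in C++ with coin-flip tower heights. All the names (Node, SkipList, MAX_LEVEL, randomLevel) are my own choices, and it skips destructors and error handling; it only shows the top-down search and the probabilistic insert described above.

    ```cpp
    #include <climits>
    #include <cstdlib>
    #include <iostream>
    #include <vector>

    constexpr int MAX_LEVEL = 16; // assumed cap on tower height

    struct Node {
        int key;
        std::vector<Node*> next; // next[i] = successor on level i
        Node(int k, int height) : key(k), next(height, nullptr) {}
    };

    struct SkipList {
        Node head{INT_MIN, MAX_LEVEL}; // sentinel with a full-height tower
        int level = 1;                 // highest level currently in use

        // Coin flips pick a new node's height: P(height >= h) = 2^-(h-1).
        static int randomLevel() {
            int h = 1;
            while (h < MAX_LEVEL && std::rand() % 2 == 0) ++h;
            return h;
        }

        // Top-down search: move right while the next key is smaller,
        // drop a level when the next key is too big or nullptr.
        bool contains(int key) const {
            const Node* cur = &head;
            for (int i = level - 1; i >= 0; --i)
                while (cur->next[i] && cur->next[i]->key < key)
                    cur = cur->next[i];
            const Node* hit = cur->next[0];
            return hit && hit->key == key;
        }

        void insert(int key) {
            std::vector<Node*> update(MAX_LEVEL, &head);
            Node* cur = &head;
            for (int i = level - 1; i >= 0; --i) {
                while (cur->next[i] && cur->next[i]->key < key)
                    cur = cur->next[i];
                update[i] = cur; // last node visited on level i
            }
            int h = randomLevel();
            if (h > level) level = h;
            Node* n = new Node(key, h);
            for (int i = 0; i < h; ++i) { // splice into every level it reaches
                n->next[i] = update[i]->next[i];
                update[i]->next[i] = n;
            }
        }
    };

    int main() {
        SkipList s;
        for (int k : {3, 7, 1, 9}) s.insert(k);
        std::cout << s.contains(7) << ' ' << s.contains(4) << '\n'; // prints: 1 0
    }
    ```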



  • They can’t corrupt Linux, at least not the larger distros. Red Hat sponsors Fedora development, and IBM owns Red Hat. Why don’t they corrupt it? Because 1) IBM specializes in cloud services and enterprise infrastructure; it has no need to deliver consumer-grade products because it already makes a shit ton of money. 2) It would ultimately be fucking themselves over: IBM’s cloud servers run on Linux, so if Fedora wanted to monetize the OS by adding trackers, it would be doing a disservice to its own business by pissing off IBM. IBM’s infrastructure is built on RPM-based distros; pulling a shitty move like that would destroy its sponsors and userbase.


  • That AI (as in “generative AI”) helps you learn if you give it the right prompt. There is evidence that when a user asks an AI to implement code, they won’t touch it afterwards because they are unfamiliar with the code it generated. The AI effectively creates a psychological black box that no programmer wants to touch, even for a (relatively speaking) small snippet of a larger program, whether it was written by another programmer or by themselves.

    To generalize further: I fully believe AI doesn’t improve the learning process; it makes material more accessible and easier to skim for people less literate in a field. I can explain Taylor expansions and power series simplistically to my brother, who is less literate and familiar with math, but I would be shocked if, after a brief general overview, he could actually approximate any function or differential equation.

    The same applies to ChatGPT: you can ask it to explain Taylor and power series solutions simplistically, or better yet, to approximate a differential equation for you; it doesn’t change the fact that you still can’t replicate it yourself. I know I’m talking about an extreme case where the person trying to learn Taylor expansions has no prior experience with math, but it won’t really work for someone who does, either… (The general form is written out at the end of this comment.)

    I want to pose a simple thought experiment from my own experience using AI on, say, Taylor expansions. Assume I want to learn Taylor expansions, I’ve already done differential calculus (the main prerequisite), and I ask ChatGPT “how do I do Taylor expansions”: that is, what is the proof of the general series expansion, plus an example of applying it to a function. What happens when I then try a problem myself is that I hit a level of uncertainty in my own ability to actually perform it, and that is when I ask ChatGPT whether I did it correctly or not. You see what I’m saying: it’s a downward spiral of losing your certainty, your sanity, and your time the more you use it.

    That is what the programmers are experiencing: it’s not that they don’t want to touch the code because they are unfamiliar with what the AI generated, it’s that they are uncertain of their own ability to fix an issue and afraid they may fuck it up even more. People are terrified of failure and of breaking things, and by using AI they “solve” that issue of theirs, even though the probability of it hallucinating is higher than if they had spent the time working through the conflicts themselves.
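
    For reference, here is the general form being discussed; this is just the standard textbook statement, nothing beyond what the comment already assumes:

    ```latex
    % Taylor expansion of f about the point a:
    f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n
    % Example: expanding e^x about a = 0 gives the familiar
    e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots
    ```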


  • Privacy reasons. More specifically, I just don’t like using platforms when there are alternatives that don’t compromise my data, and in the end I don’t lose that many features or communities going this route. That said, I do miss shitting on the people who joined the “christian V atheist” Facebook groups; it’s one of my guilty pleasures. These people can’t hold a logical debate, and their arguments are often completely unrelated to Christianity or atheism, so I end up just personally insulting them.


  • "Because I fucking hate my privacy, and Lemmy and other FOSS media platforms is like veggies on my dinner plate – I don’t want it.

    I want people to know when I get my first boner, when I inevitably kick the bucket, and when I announce I got a new position (while users on the platform add context that I was hard through the entire interview process). Because why celebrate with family and friends when I’ve got the whole internet’s asshole comments to read and respond to."

    This is my delusional interpretation of why users don’t join Lemmy.


  • I want to believe you, but the people at my school are abusing it a lot, to the point where they just feed an entire assignment through chatGPT and it gives them a solution.

    The only time I saw it not fully work was with my skip list implementation. I asked an LLM to implement a skip list with insert, delete, and get functionality. What it gave me was an implementation that traversed the list like a standard linked list: it was unaware of the time-complexity idea behind the skip list and implemented it as a plain O(n) linked list. It works, but it never actually “skips” nodes. I wonder how many students are shitting their pants when they realize the runtime isn’t any better than a standard linked list’s. (A sketch of the failure mode is below.)
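
    To illustrate, this is roughly the degenerate search such an implementation boils down to. The function name is mine and it reuses the Node type from the skip list sketch above: the node still carries a tower of pointers, but the search only ever follows level 0, so every lookup is a full linear walk.

    ```cpp
    // Hypothetical reconstruction: the skip-list shape is there, but only
    // level 0 is ever used, so no skipping happens and lookups are O(n).
    bool degenerateContains(const Node* head, int key) {
        for (const Node* cur = head->next[0]; cur != nullptr; cur = cur->next[0])
            if (cur->key == key) return true; // found, after a linear scan
        return false;
    }
    ```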


  • No, my intention wasn’t to undermine the value of a degree. I’m saying that most people’s priority in getting a degree, more specifically an engineering degree, is just to have a paycheck. On a related note, there are a lot of “engineering majors” at my uni who use generative AI to code and don’t actually enjoy the process of learning.

    So yea, at the rate generative AI is being adopted and used at my school, a pool boy could do what most of the sophomore engineers do.


  • Dark matter isn’t matter; I know, it’s a shitty name to call something “matter” that isn’t matter. Dark matter is a force. The most common place it shows up is in astronomy, where galaxies aren’t where we calculate they should be, hence there is some external force being applied that we don’t know about and haven’t found a way to take into account. I guess we call it “dark matter” instead of “dark force” because, for a force to be applied, there must be some mass. Still, I think it’s illogical to assume dark matter is matter when we don’t know what is exerting the force. For all we know, it could be the accumulated pull of other galaxies acting on the observed galaxy that we’re simply not taking into account.
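
    For what it’s worth, the classic quantitative version of this discrepancy is the galaxy rotation curve; the following is the standard textbook argument, not a claim from the comment above:

    ```latex
    % Newtonian prediction for a star orbiting at radius r, where M(r) is
    % the mass enclosed inside the orbit:
    \frac{v^2}{r} = \frac{G\,M(r)}{r^2}
    \quad\Longrightarrow\quad
    v(r) = \sqrt{\frac{G\,M(r)}{r}}
    % Using only the visible mass, v(r) should fall off at large r; the
    % observed curves stay roughly flat, and that mismatch is what gets
    % attributed to dark matter.
    ```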