• 0 Posts
  • 32 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • Absolutely, air traffic in the sky should be identified. There’s no problem with that; the problem is that it’s far too easy to find out everything about an aircraft owner simply by seeing the number on their tail.

    The rich guys obfuscate that info with shell corps to own the aircraft.

    Shouldn’t everyone have the right to the same level of privacy regardless of how much money they have?



  • It is different because you typically need to know the municipality I live in first.

    Also the registration allows anyone to track me anytime I fly.

    How would you feel if your car carried a public GPS transponder broadcasting who you are, where you are, and where you live? And what if you were required to plaster its registration number on the side of your vehicle in letters large enough to be read from a block away?

    It’s a massive invasion of personal privacy.



  • This is actually most helpful to the little guys that own $20,000 airplanes.

    I have a small airplane and it’s always bothered me that my name and address are publicly accessible through the FAA registry.

    Most pilots I know are careful about publishing photos online that show their tail number, which is printed in large bold letters on either side of the aircraft. That registration number can be entered into websites like flightaware.com, and from there someone is literally two clicks away from my full name and home address.


  • Well, OpenAI has clearly scraped everything that is scrape-able on the internet, copyrights be damned. I haven’t actually used DeepSeek enough to make a strong analysis, but I suspect Sam is just mad they got beat at their own game.

    The real innovation that isn’t commonly talked about is the invention of Multi-head Latent Attention (MLA), which is what drives the dramatic efficiency gains in both memory (59x) and computation (6x). It’s an absolute game changer, and I’m surprised OpenAI hasn’t released their own MLA model yet.
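
    For the curious, here’s a toy sketch of MLA’s core trick: compressing the attention cache into a small shared latent vector per token. All the dimensions and weights below are made up for illustration; this is not DeepSeek’s actual implementation.

    ```python
    import numpy as np

    # Toy sketch of Multi-head Latent Attention's core trick: instead of caching
    # full keys/values for every head, cache one small latent vector per token
    # and expand it back on the fly. All dimensions here are made up.
    d_model, d_latent, n_heads, d_head = 1024, 64, 8, 128
    rng = np.random.default_rng(0)

    W_down = rng.standard_normal((d_model, d_latent)) * 0.02           # shared compression
    W_up_k = rng.standard_normal((d_latent, n_heads * d_head)) * 0.02  # expand to keys
    W_up_v = rng.standard_normal((d_latent, n_heads * d_head)) * 0.02  # expand to values

    h = rng.standard_normal((16, d_model))   # hidden states for 16 cached tokens
    latent = h @ W_down                      # (16, 64): all the KV cache stores
    K, V = latent @ W_up_k, latent @ W_up_v  # reconstructed keys/values, (16, 1024)

    # Cache per token: d_latent = 64 floats instead of 2 * n_heads * d_head = 2048,
    # a 32x memory reduction in this toy setup.
    print(latent.shape, K.shape, V.shape)
    ```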

    While on the subject of stealing data: I have long been of the strong opinion that there is no such thing as copyright when it comes to training data. Humans learn by example, and all works are derivative of those that came before, at least to some degree. Thus, if humans can’t be accused of using copyrighted text to learn how to write, then AI shouldn’t be either. Just my hot take that I know is controversial outside of academic circles.


  • Yah, I’m an AI researcher, and with the weights released for DeepSeek, anybody can run an enterprise-level AI assistant. Running the full model natively does require about $100k in GPUs, but anyone with that hardware could easily fine-tune it with something like LoRA for almost any application, as sketched below. That model can then be distilled and quantized to run on gaming GPUs.
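
    Roughly, the fine-tuning step looks like this with Hugging Face’s peft library. The checkpoint name, target modules, and hyperparameters are placeholders, not a tested recipe:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    # Placeholder checkpoint; substitute whatever DeepSeek release you can host.
    base = "deepseek-ai/DeepSeek-V3"
    model = AutoModelForCausalLM.from_pretrained(base)
    tokenizer = AutoTokenizer.from_pretrained(base)

    lora = LoraConfig(
        r=16,                                 # low-rank adapter dimension
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],  # which layers get adapters; varies by model
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% of the full model

    # ...then train the adapters on domain data with a standard torch/Trainer loop.
    ```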

    It’s really not that big of a barrier. Yes, $100k in hardware is real money, but from a non-profit entity’s perspective that is peanuts.

    Also, adding a vision encoder for images to DeepSeek would not be that difficult in theory, for the same reason. In fact, I’m working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying they share the same first-layer vision encoder, with the textual chain-of-thought tokens read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove it, I would be very interested to know!)


  • I believe it could and should be made harder, but there is already a high barrier to purchasing an investment property. For a business loan on residential housing, an investor needs a 25-30% down payment. Also, I think the longest terms are 15 years rather than 30, but I could be wrong.

    All the small-time landlords acquired their homes through primary-residence loans, which allow for PMI and the smaller down payments that only exist because they are subsidized by the government. A primary-residence loan requires either that the owner lie to the government and the bank, which exposes them to serious liability in the sense that the lender could call the loan due immediately if the lie were discovered, or that the owner has lived in the home for at least one year.



  • Based on the amount of vitriol I’ve personally received on this site for renting one property while I am temporarily relocated to attend school, the answer is yes.

    For some reason everyone views being a landlord as easy money, but in reality the return on investment from renting out a single-family home is worse than the stock market’s.

    Edit: Isn’t it funny how the critics below didn’t even ask questions about a specific situation where it does make sense to rent out an owned home? Instead of trying to understand why someone might make the choice they made, they sling insults and make sweeping assumptions to reinforce their skewed world view. Honestly, it’s this shit that’s why Trump won. Leftists can’t see the forest for the trees and are willing to engage in ever-escalating purity tests that only alienate voters sympathetic to leftist causes.

    I worked hard to be able to own my own house. I saved money and took out a loan; I never received a penny from my parents or an inheritance from a family member who died.

    A greater return on investment can absolutely be made by investing in the S&P 500; returns for single-family homes will be worse. The S&P 500 can be expected to rise an average of 10% per year, while a single-family home will appreciate by about 4.3% per year. With interest rates higher than that level of appreciation, there is effectively no profit from the leverage that borrowing money typically provides. Renting is typically 37% cheaper than buying on a month-to-month basis. Owners don’t expect to break even on a home until after 5-10 years of ownership (depending on the city), and over 2/3 of the cost of a mortgage goes toward loan interest and taxes.

    So what does a house get you, given all these downsides? Freedom. Freedom to decorate how you choose, to remodel, to build a deck, to install Ethernet throughout the house, to add an extension. But most of all, it gives long-term stability. After that 5-year period where a homeowner is taking a loss because of buying, they are finally ahead of a renter financially. This is why it doesn’t make sense to sell a home over short-term circumstances: owning a home is inherently a long-term benefit. That’s especially true when selling costs you about 10% of the home’s value, and it would take roughly 3 years of appreciation just to cover that cost (see the sketch below), which isn’t remotely guaranteed, as evidenced by home values rising only 0.12% after falling 5% the previous year.
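
    Back-of-the-envelope, using only the rates quoted above (illustrative, not a forecast):

    ```python
    import math

    # Rates quoted above; illustrative only.
    sp500 = 0.10          # avg annual S&P 500 return
    home = 0.043          # avg annual single-family home appreciation
    selling_cost = 0.10   # fraction of home value lost when selling

    # Years of appreciation needed just to earn back the cost of selling:
    years = math.log(1 / (1 - selling_cost)) / math.log(1 + home)
    print(f"{years:.1f} years to recoup selling costs")  # ~2.5, i.e. roughly 3

    # $100k in the S&P vs. in home appreciation over 10 years (ignoring
    # leverage, rent saved, maintenance, and taxes):
    print(f"S&P 500: ${100_000 * (1 + sp500) ** 10:,.0f}")  # ~$259,000
    print(f"Home:    ${100_000 * (1 + home) ** 10:,.0f}")   # ~$152,000
    ```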





  • SLS is on track to be more expensive per moon mission, adjusted for inflation, than the Apollo program. It is wildly too expensive and should be cancelled.

    This is coupled with the fact that the rocket is incapable of sending a crewed capsule to low lunar orbit, which is why the Lunar Gateway is planned for a Near-Rectilinear Halo Orbit instead.

    Those working in the space industry know that SpaceX’s success is due not to Elon but to Gwynne Shotwell. She is the President and COO of SpaceX and is responsible for actually running the company. The best outcome after the election would be to remove Elon from the board and revoke his ownership of what is effectively a defense company, as a consequence of his political interference in this election. Employees at SpaceX would be happy, the government would be happy, and the American people would be happy.


  • The technical definition of AI in academic settings is any system that can perform a task with relatively decent performance and do so on its own.

    The field of AI is absolutely massive and includes super basic algorithms like Dijkstra’s algorithm for finding the shortest path in a graph or network (sketched below). Plain shortest-path is actually solvable exactly in polynomial time; it’s the richer routing problems, like visiting many stops in the best order (the traveling salesman problem), that are NP-hard with no known polynomial-time exact solution. For those, AI algorithms use programmed heuristics to approximate optimal solutions, and it’s entirely possible the path generated is in fact not optimal, which is why your GPS doesn’t always give you the guaranteed shortest route.
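
    For reference, the classic algorithm fits in a few lines (a minimal sketch, not production routing code):

    ```python
    import heapq

    # Minimal Dijkstra's algorithm: exact shortest paths in a weighted graph,
    # computed in polynomial time with no heuristics needed.
    def dijkstra(graph: dict, start: str) -> dict:
        dist = {start: 0}
        heap = [(0, start)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue  # stale queue entry; a shorter route was already found
            for neighbor, weight in graph.get(node, []):
                nd = d + weight
                if nd < dist.get(neighbor, float("inf")):
                    dist[neighbor] = nd
                    heapq.heappush(heap, (nd, neighbor))
        return dist

    roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
    print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1}
    ```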

    To help distinguish fields of research, we use extra qualifiers to narrow focus, such as “classical AI” and “symbolic AI”. Even “machine learning” is too ambiguous, as it was originally a statistical process to find trends in data, or “statistical AI”. Ever used Excel to find a line of best fit for a graph? That’s “machine learning”.
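
    That Excel trendline really is just least-squares regression, which a couple of lines reproduce:

    ```python
    import numpy as np

    # The Excel "line of best fit" is least-squares regression -- arguably the
    # simplest form of machine learning. Sample data below is made up.
    x = np.array([1, 2, 3, 4, 5])
    y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

    slope, intercept = np.polyfit(x, y, deg=1)  # fit y = slope*x + intercept
    print(f"y = {slope:.2f}x + {intercept:.2f}")
    ```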

    That said, “statistical AI” does accurately encompass all the AI systems people commonly think about, like “neural AI” and “generative AI”. But without getting into more specific qualifiers, “Deep Learning” and “Transformers” are probably the best ways to narrow down what most people think of when they hear AI today.


  • This is truly a terrible accident. Given the flight tracking data and the cold winter weather at the time, structural icing likely caused the crash.

    Ice increases an aircraft’s stall speed, and especially when an aircraft is flown on autopilot in icing conditions, the autopilot’s pitch trim can end up set at the limits of the aircraft without the pilots ever knowing.
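
    To put rough numbers on that (illustrative figures, not data from this accident): stall speed scales with 1/√CLmax, so even a modest loss of maximum lift raises it sharply.

    ```python
    import math

    # Stall speed V_s = sqrt(2W / (rho * S * CL_max)), so V_s scales with
    # 1 / sqrt(CL_max). Both figures below are made up for illustration.
    clean_stall_kts = 90   # hypothetical clean stall speed
    cl_loss = 0.30         # suppose ice strips 30% off the max lift coefficient

    iced_stall_kts = clean_stall_kts / math.sqrt(1 - cl_loss)
    print(f"iced stall speed = {iced_stall_kts:.0f} kts")  # ~108 kts
    ```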

    Eventually the icing becomes so severe that the stall speed of the ice-laden wing and elevator exceeds the current cruising speed, resulting in an aerodynamic stall, which, if not immediately corrected with the right control inputs, will develop into a spin.

    The spin shown in several videos is a terrifying flat spin. Flat spins develop from normal spins after just a few rotations. It’s very sad and unfortunate that we can hear both engines producing power while the plane is in a flat spin toward the ground: the first thing to do when a spin is encountered is to eliminate all sources of power, as power will aggravate a spin into a flat spin.

    Once a flat spin is encountered, recovery from that condition is not guaranteed, especially in multi-engine aircraft where the outboard engines create a lot of rotational inertia.




  • I am an LLM researcher at MIT, and hopefully this will help.

    As others have answered, LLMs have only learned the ability to autocomplete given some input, known as the prompt. Functionally, the model is strictly predicting the probability of the next word+, with some randomness injected so the output isn’t exactly the same for any given prompt.

    The probability of the next word comes from what was in the model’s training data, combined with a very complex mathematical method that computes the impact of every previous word on every other previous word and on the newly predicted word, called self-attention. You can think of this as a computed relatedness factor.
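
    In toy form, that relatedness computation looks like this (a stripped-down sketch without the learned projections and multiple heads real models use):

    ```python
    import numpy as np

    # Toy scaled dot-product self-attention: every token's vector is scored
    # against every other token's, producing the "relatedness factor" above.
    def self_attention(X):                # X: (n_tokens, d) token vectors
        d = X.shape[1]
        scores = X @ X.T / np.sqrt(d)     # (n_tokens, n_tokens) pairwise scores
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
        return weights @ X                # each token becomes a weighted mix of all

    X = np.random.default_rng(0).standard_normal((5, 8))  # 5 "tokens", 8 dims each
    print(self_attention(X).shape)        # (5, 8)
    ```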

    This relatedness factor is very computationally expensive and grows quadratically with the number of words considered, so models are limited in how many previous words they can use to compute relatedness. This limitation is called the context window. The recent breakthroughs in LLMs come from the use of very large context windows to learn the relationships of as many words as possible.

    This process of predicting the next word is repeated iteratively until a special stop token is generated, which tells the model to stop generating more words. So literally, the model builds entire responses one word at a time, from left to right.
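
    Written out explicitly with a small open model (gpt2 here only because it’s tiny; the mechanics are the same for any LLM), the loop is:

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # The autoregressive loop described above, written out by hand instead of
    # relying on model.generate().
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok.encode("The capital of France is", return_tensors="pt")
    for _ in range(20):                        # cap length in case no stop token comes
        logits = model(ids).logits[0, -1]      # scores for the next token only
        probs = torch.softmax(logits / 0.8, dim=-1)  # temperature injects randomness
        next_id = torch.multinomial(probs, 1)        # sample the next token
        if next_id.item() == tok.eos_token_id:       # special stop token
            break
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # feed it back in

    print(tok.decode(ids[0]))
    ```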

    Because all future words are predicated on the previously stated words, whether in the prompt or in the words generated so far, it becomes impossible for the model to apply even the most basic logical concepts unless all the components required are present in the prompt or have somehow serendipitously been stated by the model in its generated response.

    This is also why LLMs tend to work better when you ask them to work out all the steps of a problem instead of jumping to a conclusion, and why the best models tend to rely on extremely verbose answers to give you the simple piece of information you were looking for.

    From this fundamental understanding, hopefully you can now reason about the LLM’s limitations in factual understanding as well. For instance, if a given fact was never mentioned in the training data, or an answer simply doesn’t exist, the model will make it up, inferring the next most likely word to create a plausible-sounding statement. Essentially, the model has gotten so good at faking language understanding that even when it has no factual basis for an answer, it can easily trick an unwitting human into believing the answer is correct.

    ---

    +More specifically, these words are tokens, which usually contain some smaller part of a word. For instance, “understand” and “able” would be represented as two tokens that, when put together, become the word “understandable”.


  • Agreed.

    Nevertheless, the Federal regulators will have an uphill battle as mentioned in the article.

    Neither “puffery” nor “corporate optimism” counts as fraud, according to US courts, and the DOJ would need to prove that Tesla knew its claims were untrue.

    The big thing they could get Tesla on is the safety record for Autosteer. But again, there would need to be proof that the company knew.