{"channel":"llm","content":"One of the under-appreciated benefits of using LLMs for coding is that they are experts on *most* topics. \r\n\r\nWriting Trakaido would have been impossible if the LLM did not already have familiarity with Lithuanian. (<red> It was also familiar with dozens of other languages.)\r\n\r\nLanguages are a strong point for large *language* models, but there are many other software projects that can benefit from having domain experts available to all developers at all times at a minimal cost.\r\n\r\n----\r\n\r\nThis applies to non-coding projects as well.  Trying to use LLMs to help file my taxes was ... still a bit optimistic in early 2026 (<mogue> there were several mistakes it made).  But it generally understood what was going on.  *Without* context.\r\n\r\n<red> I have discussed before the question of context v. built-in training data.  Should the *machine* be able to play chess without instructions (or external tools)?  What about backgammon?  Or << a game I just made up >>?  Is there any reason to believe/expect the universal game-playing machine will be a Large Language Model?","created_at":"2026-04-03T16:32:35.389881","id":782,"llm_annotations":{},"parent_id":null,"processed_content":"<p>One of the under-appreciated benefits of using LLMs for coding is that they are experts on <em>most</em> topics. \r</p>\n<p>Writing Trakaido would have been impossible if the LLM did not already have familiarity with Lithuanian. <span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> It was also familiar with dozens of other languages.</span></span>\r</p>\n<p>Languages are a strong point for large <em>language</em> models, but there are many other software projects that can benefit from having domain experts available to all developers at all times at a minimal cost.\r</p>\n<hr class=\"section-break\" />\n<p>This applies to non-coding projects as well.  Trying to use LLMs to help file my taxes was ... still a bit optimistic in early 2026 <span class=\"colorblock color-mogue\"><span class=\"sigil\">\ud83c\udf0e</span><span class=\"colortext-content\"> there were several mistakes it made</span></span>.  But it generally understood what was going on.  <em>Without</em> context.\r</p>\n<p><span class=\"colorblock color-red\"><span class=\"sigil\">\ud83d\udca1</span><span class=\"colortext-content\"> I have discussed before the question of context v. built-in training data.  Should the <em>machine</em> be able to play chess without instructions (or external tools)?  What about backgammon?  Or <span class=\"literal-text\">a game I just made up</span>?  Is there any reason to believe/expect the universal game-playing machine will be a Large Language Model?</span></span></p>","quotes":[],"subject":"LLMs and expertise"}
