• 1 Post
  • 317 Comments
Joined 2 years ago
Cake day: July 14th, 2023

  • Is your goal to create things that can be published or used in a project, or to create audiobooks for yourself to listen to?

    For voiceovers for text, I use Kokoro FastAPI, which has a web frontend. The frontend is only compatible with Chromium browsers on desktop or Android, which sucks since my daily drivers are Firefox and an iPhone (there are workarounds in the thread), but it supports voice mixing, speed changes, etc… It also has an issue where it keeps the models (about 3GB) in memory; I keep the CPU version loaded normally and swap to the GPU version if I need it to be faster. If you want something similar for Bark, check out Bark-GUI.
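
    You can also script generation instead of using the web frontend, since the server exposes an OpenAI-style speech endpoint. A minimal sketch of what that might look like - the port, voice name, and output format here are assumptions, so check your own deployment:

    ```python
    # Hypothetical example: generate speech from a local Kokoro FastAPI instance
    # via its OpenAI-compatible /v1/audio/speech route. Port, voice name, and
    # response format are assumptions - adjust to match your deployment.
    import requests

    KOKORO_URL = "http://localhost:8880/v1/audio/speech"  # assumed port

    payload = {
        "model": "kokoro",
        "input": "Chapter one. It was a dark and stormy night.",
        "voice": "af_bella",        # or a mix like "af_bella+af_sky" if supported
        "response_format": "mp3",
        "speed": 1.0,
    }

    resp = requests.post(KOKORO_URL, json=payload, timeout=120)
    resp.raise_for_status()

    with open("chapter_01.mp3", "wb") as f:
        f.write(resp.content)
    ```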

    I’ve also dabbled a bit in some TTS features that have Comfy nodes, though at this point mostly just in terms of getting them set up. For my purposes thus far Kokoro has been fine (and I prefer the FastAPI project over the Comfy nodes for most of my uses), but I’ve found nodes for Kokoro, Dia, F5 TTS, Orpheus, and Zonos.

    Autiobooks and audiblez both look promising. A few weeks ago, I used the Kokoro FastAPI web frontend to create an audiobook for an ebook I worked on that used entirely self-hosted AI generation for the outlining and prose. Audiblez, which I found out about a couple of days later, looks like it would have simplified that process substantially. Still, I’d personally like something more like an audiobook studio, where I can more easily swap voices back and forth, add emotions, play with speed on a more granular level, etc… I’m thinking about building something like that myself at some point, but it’ll be a minute - hopefully someone else will beat me there.

    I posted a comment here a few weeks back on a similar topic. I’ve since used OpenReader-WebUI and like it, though that’s not for producing audiobooks, but for a read-along experience. Reproducing the comment below in case it’s helpful for you:

    If you want to generate audiobooks using your own / a hosted TTS server, check out one of these options:

    • OpenReader-WebUI - this has built-in read-along capability and can be deployed as a PWA, which lets you download the audiobooks to your phone and use them offline
    • p0n1/epub_to_audiobook
    • ebook2audiobook

    If you don’t have a decent GPU, Kokoro is a great option as it’s fast enough to run on CPU and still sounds very good. If you’re going to use Kokoro, Audiblez (posted by another commenter) looks like it makes that more of an all-in-one option.

    If you want something that you can use without an upfront building of the audiobook, of the above options only OpenReader-WebUI supports that. RealtimeTTS is a library that handles that, but I don’t know if there are already any apps out there that integrate it.

    If you have the audiobook generation handled and just want to be able to follow along with text / switch between text and audio, check out https://storyteller-platform.gitlab.io/storyteller/

  • Right now I have Ollama / Open-WebUI, Kokoro FastAPI, ComfyUI, Wan2GP, and FramePack Studio set up. I recently (as in yesterday) configured an API key middleware with Traefik and placed it in front of Ollama and Comfy, but nothing is using them yet.
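
    For reference, a client call through that middleware would look roughly like the sketch below. The hostname and header name are made up - they depend entirely on how the Traefik router and middleware are configured - but the Ollama request body itself is the standard /api/generate payload:

    ```python
    # Hypothetical client call to Ollama routed through Traefik with an API-key
    # check. The URL and header name are assumptions tied to my Traefik config;
    # only the /api/generate payload is standard Ollama.
    import requests

    OLLAMA_URL = "https://ollama.example.lan/api/generate"  # assumed Traefik route
    API_KEY = "change-me"                                    # whatever the middleware expects

    resp = requests.post(
        OLLAMA_URL,
        headers={"X-Api-Key": API_KEY},  # assumed header checked by the middleware
        json={
            "model": "qwen3:32b",
            "prompt": "Say hello in one sentence.",
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```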

    I’ll probably try out Devstral with one of the agentic coding frameworks, like Void or Anon Kode. I may also try out one of the FOSS writing studios (like Plot Bunni) and connect it to my own Ollama instance. I could use NovelCrafter, but paying a subscription fee to use my own server for the compute-intensive part feels silly to me.

    I tried to use Open Notebook (basically a replacement for NotebookLM) with Ollama and Kokoro, with Kokoro FastAPI as my OpenAI endpoint, but it turns out it only supported, and required, text embeddings from OpenAI, so I couldn’t run it fully locally. At some point, if they don’t fix that, I’m planning to either add support myself or set up some routes with Traefik so that the endpoints Open Notebook uses point to the service I want to use.
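
    The idea behind the rerouting is just that anything Open Notebook sends to an OpenAI-style embeddings path would get answered locally instead. Assuming your Ollama version exposes the OpenAI-compatible /v1/embeddings route (the model name and port below are just examples), a quick local sanity check might look like:

    ```python
    # Sanity check: request embeddings from a local Ollama instance over its
    # OpenAI-compatible route. Assumes your Ollama version exposes /v1/embeddings
    # and that you've pulled an embedding model such as nomic-embed-text.
    import requests

    resp = requests.post(
        "http://localhost:11434/v1/embeddings",
        json={"model": "nomic-embed-text", "input": "test sentence"},
        timeout=60,
    )
    resp.raise_for_status()
    embedding = resp.json()["data"][0]["embedding"]
    print(len(embedding))  # dimensionality of the returned embedding vector
    ```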

    ETA: n8n is one of the services I plan to set up next, and I’ll likely end up integrating both Ollama and Comfy workflows into it.







  • I think the better question than “Does the experience system sound like it has potential,” then, is “Does the overall concept / system have potential?”

    My gut says probably, but it depends a lot more on what you’re willing to put into it and what you want out of it. What’s your metric for success? If it’s something you want to run yourself and share online so a few groups use it, that’s a lot more achievable than getting a publishing deal, for example. In between, publishing on DriveThruRPG or something similar at a nominal cost (like $2-$5) would take more effort than the former and less than the latter; and the higher the price and the more players you want, the more effort you need to put in (and a lot of that isn’t just system building, but art, community building, marketing, etc.).

    From what you’ve shared, it sounds like an interesting system. I could especially see it working in an academy setting where grinding skills to pass practical exams is one of the players’ goals. I could also see it working well in a loosely GMed play-by-post setup, with the players self-enforcing (or possibly leveraging some tools built into the site to track resource pools, experience, rolling, etc.), though I haven’t played in a forum game myself, so I might be way off base.

    Did your system have classes or was it completely free-form in terms of gaining access to those skill trees?


  • I run a Monster of the Week game and my players get experience throughout sessions, as well as at the end. The mechanics are basically:

    • It takes 5 experience points to level up.
    • If you fail a roll, you get an experience point.
    • If you level up, you get the benefit immediately.
    • At the end of the session, everyone gets 0-2 experience points.

    I think other PbtA (Powered by the Apocalypse - systems inspired by Apocalypse World) systems do something similar.

    I grew increasingly frustrated with the system of only distributing advancement/experience points at the end of a session.

    Isn’t the simple fix to this to just distribute experience points as soon as they’re earned?

    At some point, I started to devise a play system that relied on a split experience attribution system, with players being able to automatically rack up experience points from directly using their skills/abilities, while the DM would keep a tally of points from goals/missions achieved, distributable at session end.

    Your system sounds like the way that skill-based video game RPGs (Elder Scrolls games and Arcanum come to mind) handle experience.

    In a lot of games I’ve played, I’d rather get experience for in-game accomplishments immediately and to be able to train skills like this during downtime - generally between games.

    To those with more experience in TTRPGs: would this be feasible? Or enticing? Interesting?

    I could see people being interested in it. You get instant gratification and a bit of extra crunchiness. A lot of players enjoy that.

    With the right skill system I could see this being useful. My main concern is that if you put this on top of a system with relatively few skills, it could encourage people to game it by grinding. There are ways to mitigate that, though.

    In a system with fewer skills, instead of just being experience points, the “currency” you earned this way could be used for temporary power ups related to the skill in question.

    You could also limit it so you only rewarded players for story-related tasks.


  • Copied from the post:

    You may have seen reports of leaks of older text messages that had previously been sent to Steam customers. We have examined the leak sample and have determined this was NOT a breach of Steam systems.

    We’re still digging into the source of the leak, which is compounded by the fact that any SMS messages are unencrypted in transit, and routed through multiple providers on the way to your phone.

    The leak consisted of older text messages that included one-time codes that were only valid for 15-minute time frames and the phone numbers they were sent to. The leaked data did not associate the phone numbers with a Steam account, password information, payment information or other personal data. Old text messages cannot be used to breach the security of your Steam account, and whenever a code is used to change your Steam email or password using SMS, you will receive a confirmation via email and/or Steam secure messages.

    You do not need to change your passwords or phone numbers as a result of this event. It is a good reminder to treat any account security messages that you have not explicitly requested as suspicious. We recommend regularly checking your Steam account security at any time at

    https://store.steampowered.com/account/authorizeddevices

    We also recommend setting up the Steam Mobile Authenticator if you haven’t already, as it gives us the best way to send secure messages about your account and your account’s safety.


    Assuming you’re using ollama (is there another reason to use ollama.com?), you can use compatible files from huggingface directly in ollama. The model page will give you the instructions for the command to run; I always change ollama run to ollama pull, though. Instructions: https://huggingface.co/docs/hub/ollama

    You should be able to fit Qwen3 32B at Q4_K_M with an acceptable context, and it did very well on math benchmarks (with thinking enabled). You can disable thinking by including /no_think at the end of your prompt to speed up responses, but I’m not sure how well it handles math under those circumstances. I wouldn’t even consider disabling thinking unless you were grading one question per prompt.
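
    For grading one question per prompt, the request is simple enough to script against the Ollama API; the sketch below is just an illustration (model tag and prompt wording are examples), with thinking disabled by appending /no_think - drop that suffix to keep thinking on, which is what I’d do for math:

    ```python
    # Rough sketch: send one grading question to Qwen3 via a local Ollama
    # instance. Appending "/no_think" to the prompt disables Qwen3's thinking
    # mode; remove it to keep thinking enabled (better for math accuracy).
    # Model tag and prompt are examples only.
    import requests

    question = "Is 7 * 8 = 54 correct? Answer yes or no, then explain briefly."

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen3:32b",
            "prompt": question + " /no_think",
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```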

    The ollama Qwen3 page is https://ollama.com/library/qwen3:32b and the default 32B quant is Q4_K_M. I personally am using the Q6_K quant by unsloth, and their quants have been great (when supported by ollama), often being the first to fix bugs impacting other quantizations.

    I’m not sure if Q4_K_M is the optimal quant style for Intel Arc, but the others that might be better are not supported by ollama, anyway, as far as I know.

    Qwen3’s real-world knowledge is bad, so if there are questions that rely on it you may need to include the relevant facts as part of the prompt or use an ollama frontend that supports web search.

    Other options: This does seem like something Gemma3 27B would be good at, so it’s too bad you can’t use it. Older Gemmas may be good, but I’m not sure. Llama3.3 70B is also out, unless you have a decent amount of system RAM and are okay with offloading less than half to GPU. I could see it outperforming my recommendation above, but I would be very surprised if the 8B version outperformed it. Older Qwen2.5 is decent at math, but unless you grab QwQ it doesn’t include thinking.





  • To be clear, I’m measuring the relative humidity of the air in the drybox at room temp (72 degrees Fahrenheit / 22 degrees Celsius), not of the filament directly. You can use a hygrometer to do this. I mostly use the hygrometer that comes bundled with my dryboxes (I use the PolyDryer and have several extra PolyDryer Boxes, but there are much cheaper options available) but you can buy a hygrometer for a few bucks or get a bluetooth / wifi / connected one for $15-$20 or so.

    If you put filament into a sealed box, it’ll generally - depending on the material - end up in equilibrium with the air. So the measurement you get right away will just show the humidity of the room, but if the filament and desiccant are both dry, it’ll drop; if the desiccant is dry and the filament is wet, it’ll still drop, but not as low.

    Note also that what counts as “wet” varies by material. For example, from what I’ve read, PLA can absorb up to 1% or so of its mass as moisture, PETG up to 0.2%, Nylon up to 7-8%… silica gel desiccant beads up to 40%. So when I say they’ll be in equilibrium, I’m referring to the percentage of what that material is capable of absorbing. It isn’t a linear relationship as far as I know, but if it were, that would mean: if the humidity of the air is 10% and the max moisture the material can retain is 1% of its mass, the material would end up retaining 0.1% moisture by mass. If my room’s humidity is kept at 40%, it’ll absorb moisture until it’s at 0.4% moisture by mass.
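
    To make that concrete, here’s the same back-of-the-envelope math as a tiny script, using that simplifying (and, again, not actually linear) assumption and the rough saturation figures above:

    ```python
    # Back-of-the-envelope estimate of retained moisture, using the simplifying
    # linear assumption described above (real absorption behavior isn't linear).
    # Saturation values are the rough figures mentioned in the comment.
    MAX_MOISTURE_PCT = {   # approximate max moisture uptake, % of mass
        "PLA": 1.0,
        "PETG": 0.2,
        "Nylon": 7.5,
        "silica gel": 40.0,
    }

    def estimated_moisture(material: str, box_rh_pct: float) -> float:
        """Estimated moisture by mass (%) at equilibrium with the box's relative humidity."""
        return MAX_MOISTURE_PCT[material] * (box_rh_pct / 100.0)

    print(estimated_moisture("PLA", 10))  # 0.1% of mass in a 10% RH drybox
    print(estimated_moisture("PLA", 40))  # 0.4% of mass at 40% room humidity
    ```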

    That said, this doesn’t measure it perfectly, since while most filament materials absorb moisture from the air when the humidity is higher, they don’t release it as easily. Heating it both allows the air to hold more moisture and allows the filament (and desiccant) to release more moisture.


  • What have you done to clean the bed? From the link to the textured sheet, you should be cleaning it between every print - after it cools - with 90% IPA, and if you still have adhesion issues, you should clean it with warm water and a couple drops of dish soap.

    Has the TPU been dried? I don’t normally print with TPU, but my understanding is that it needs to be at lower humidity than PLA; I use dryboxes for PLA and target a humidity of 15% or lower, and I don’t use them if they rise above 20%. The recommendation I saw for TPU was to dry it for 7 hours at 70 degrees Celsius, to target 10% humidity (or at least under 20%), and to print directly from a drybox. Note that compared to other filaments, TPU can’t recover as well from having absorbed moisture - if the filament has gotten too wet, it’ll become too brittle by the time you’ve dried it out as much as is needed. At that point you would need to start with a fresh roll, which would ideally go into a dryer and then a drybox immediately.

    You should be able to set different settings for the initial layer to avoid stringing, i.e., slower speeds and longer retraction distance. It’s a bit more complicated but you can also configure the speed for a specific range of layers to be slower - i.e., setting it to slow down again once you get to the top of the print. For an example of that, see https://forum.prusa3d.com/forum/prusaslicer/bed-flinger-slower-y-movement-as-function-of-z/

    What’s the max speed you’re printing at? My understanding is that everything other than travel should be the same speed at a given layer, and no higher than 25 mm/s. And with a bed slinger I wouldn’t recommend a much higher travel speed, either.

    In addition to a brim, have you tried adding supports?


  • stuck with the GPL forever

    If you accept a patch and don’t have the ability to relicense it, you can remove it and re-license the new codebase. You can even re-implement changes made by the patch in many cases, whether those changes are bug fixes or new features.

    If you re-implement the change, you do need to ensure this is done in a way that doesn’t cause it to become a derivative work, but it’s much easier if you have copyright to 99% of a work already and only need to re-implement 1% or so. If you’ve received substantial community contributions and the community is opposed to relicensing, it will be much harder to do so.

    A clean-room implementation - where the person rewriting the code doesn’t look at the original code and is only given a description of the functionality (which can include a detailed description of the algorithm) - is the most defensible way to perform such a rewrite and relicense, but it’s not the only option.

    You should generally consult an attorney when relicensing and shouldn’t just do it casually. But a single patch certainly doesn’t mean you’re locked in forever.