• 36 Posts
  • 1.23K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • Sure, but I think that the type of game is a pretty big input. Existing generative AI isn’t great at portraying a consistent figure in multiple poses and from multiple angles, which is something that many games are going to want to do.

    On the other hand, I’ve also played text-oriented interactive fiction where there’s a single illustration for each character. For that, it’d be a good match.

    AI-based speech synth isn’t as good as human voice acting, but it’s gotten pretty decent if you don’t need to put lots of emotion into things. It’s not capable of, say, doing Transistor, which relied a lot on its voice acting. But it could be a very good choice for adding new material for a character in an old game where the actor may no longer be around or may have had their voice change.

    I’ve been very impressed with AI upscaling. I think that upscaling textures and other assets probably has a lot of potential to take advantage of higher-resolution screens. One might need a bit of human intervention, but a factor-of-2 increase is something that I’ve found the software can do pretty well without much involvement.


  • What did you think of the new aiming system? I’ve heard mixed things, but it sounded good to me (or at least way better than a flat percentage).

    I don’t know what the internal mechanics are like; I haven’t read material about it. From a user standpoint, I just have a list of positive and negative factors impacting my hit chance, so less information than a percentage would give. I guess I’d vaguely prefer the percentage (I’m generally not a huge fan of games that have the player rely on mechanics while hiding the details of those mechanics), but it’s nice to know what inputs are present. Honestly, it hasn’t been a huge factor for me one way or the other; I feel like I’ve got a solid-enough idea of roughly what the chances are.

    even if it doesn’t hit the same highs as JA2, there hasn’t really been much else that comes close and a more modern coat of polish would be welcome.

    Yeah, I don’t know of other things that have the strategic aspect. For the squad-based tactical turn-based combat, there are some options that I’ve liked playing in the past.

    While Wasteland 2 and Wasteland 3 aren’t quite the same thing (they’re closer to Fallout 1 and 2, as Wasteland 1 was a major inspiration for those games), their squad-based, turn-based tactical combat system is somewhat similar, and if you’re hunting for games with that, you might enjoy them too.

    I also played Silent Storm and enjoyed it, though it’s now pretty long in the tooth (well, so is Jagged Alliance 2…). Even more of a combat focus. Feels lower budget, slightly unfinished.

    And there’s X-Com. I didn’t like the new ones, which are glitzy, with lots of time spent on dramatic animations and such, but maybe I should go back and give them another chance.


  • I’d also add that ASCII has had some similar issues in the past, but those tend to have been ironed out by now via changes to onscreen typefaces.

    For example, some old typewriters don’t have a “0” key or a “1” key, because capital-O and lowercase-l looked similar enough to the digits, and context was sufficient to let them be used in place of the corresponding numbers. This trained some people to do that, to the point that various software adapted to permit misuse of one in place of the other. To this day, I can open up Firefox, and the following webpage will render green text:

    <html><font color="#OOFFOO">green text
    </font></html>
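    As a toy illustration of that leniency (the function name and mapping here are hypothetical, not what any browser actually does internally), a parser can simply translate the confusable letters before converting:

```python
# Map the classic typewriter-era confusables onto the digits they were
# used in place of. (Hypothetical sketch, not a real browser API.)
CONFUSABLES = str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1"})

def forgiving_hex_color(s: str) -> int:
    """Parse a '#RRGGBB' color, tolerating O-for-0 style substitutions."""
    return int(s.lstrip("#").translate(CONFUSABLES), 16)

# '#OOFFOO' (letter O's) parses to the same value as '#00FF00'.
print(f"#{forgiving_hex_color('#OOFFOO'):06X}")
```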
    

    Some other fixes were made over time, like rendering capital-i, lowercase-l, and the pipe (“I”, “l”, and “|”) as more-visually-distinct characters in typefaces where this matters.

    In the monospaced font world, “programming” or “coding” fonts, where not confusing one character for another is particularly important, place a premium on keeping such characters distinctive, even at the cost of some aesthetic appeal or conformance to traditional typography and handwriting conventions. You’ll get more-distinctive “.” and “,”, “O” and “0”, “l”, “I”, and “|”, “j” and “i”, etc.





  • tal@lemmy.today to Programming@programming.dev · PNG is back! · edited 12 days ago

    PNG has terrible compression

    It’s fine if you’re using it for what it’s intended for, which is images with flat color or an ordered dither.

    It’s not great for compressing photographs, but then, that wasn’t what it was aimed at.

    Similarly, JPEG isn’t great at storing flat-color lossless images, which is PNG’s forte.

    Different tools for different jobs.
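    The tradeoff is visible even without an image library. PNG’s lossless stage is DEFLATE (zlib), so a stdlib-only sketch of how DEFLATE fares on flat-color bytes versus noisy, photo-like bytes illustrates the point (the raw byte buffers here are stand-ins, not real PNG files):

```python
import os
import zlib

# Flat green RGB raster: the kind of data PNG's DEFLATE stage loves.
flat = bytes([0, 255, 0]) * (256 * 256)

# Random bytes as a stand-in for photographic detail, which DEFLATE
# can barely compress at all.
noisy = os.urandom(256 * 256 * 3)

print(len(flat), len(zlib.compress(flat)))    # huge reduction
print(len(noisy), len(zlib.compress(noisy)))  # nearly no reduction
```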


  • tal@lemmy.today to Programming@programming.dev · PNG is back! · edited 12 days ago

    At least at one point, GIF89a (animated GIF) support was universal among browsers, whereas animated PNG support was patchy. Could have changed.

    I’ve also seen “GIF” files served up online that are actually, internally, animated PNG files. No idea why people do that.
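    Checking what such a file actually is only takes a look at the magic bytes. A small sketch (hypothetical helper; the APNG check is simplified, just looking for an acTL chunk ahead of the first IDAT):

```python
def sniff_image(data: bytes) -> str:
    """Identify GIF vs. PNG vs. animated PNG by signature, not extension."""
    if data[:6] in (b"GIF87a", b"GIF89a"):
        return "gif"
    if data[:8] == b"\x89PNG\r\n\x1a\n":
        # An animated PNG carries an acTL chunk before the first IDAT.
        idat = data.find(b"IDAT")
        return "apng" if idat != -1 and b"acTL" in data[:idat] else "png"
    return "unknown"
```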


  • tal@lemmy.today to Programming@programming.dev · PNG is back! · edited 11 days ago

    On the “better compression” front, I’d also add that I doubt that either PNG or WebP represent the pinnacle of image compression. IIRC from some years back, the best known general-purpose lossless compressors are neural-net based, and not fast.

    kagis

    https://fahaihi.github.io/NNLCB/

    These guys apparently ran a number of tests. A neural-net-based compressor named “NNCP” got their best compression ratio, beating out the also-neural-net-based PAC, which I think was the compressor I was recalling.

    The compression time for either was far longer than for traditional non-neural-net compressors like LZMA, with NNCP taking about 12 times as long as PAC and PAC taking about 127 times as long as LZMA.
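    The stdlib can’t reproduce the neural-net end of that benchmark, but the same ratio-versus-time tradeoff shows up even between traditional compressors. A quick sketch comparing DEFLATE and LZMA (the input string is arbitrary):

```python
import lzma
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog. " * 5000

for name, compress in (("zlib", zlib.compress), ("lzma", lzma.compress)):
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(data)} -> {len(out)} bytes in {elapsed:.4f}s")
```

    On repetitive input like this, LZMA typically produces the smaller output while taking noticeably longer; the neural-net compressors push that same tradeoff much further.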


  • tal@lemmy.today to Programming@programming.dev · PNG is back! · edited 11 days ago

    What’s next?

    I know you all immediately wondered, better compression? We’re already working on that. And parallel encoding/decoding, too! Just like this update, we want to make sure we do it right.

    We expect the next PNG update (Fourth Edition) to be short. It will improve HDR & Standard Dynamic Range (SDR) interoperability. While we work on that, we’ll be researching compression updates for PNG Fifth Edition.

    One thing I’d like to see from image formats and libraries is better support for very high resolution images. Like, images where you’re zooming into and out of a very large, high-resolution image and probably only looking at a small part of the image at any given point.

    I was playing around with some high resolution images a bit back, and I was quite surprised to find how poor the situation is. Try viewing a very high resolution PNG in your favorite image-viewing program, and it’ll probably choke.

    • At least on Linux, it looks like the standard native image viewers don’t do a great job here, and as best I can tell, the norm is to use web-based viewers. These deal with image formats’ poor support for high resolutions by generating versions of the image at multiple pre-scaled levels, then slicing each level into tiles and saving each tile as a separate image, so that a web browser just pulls down a handful of appropriate tiles from a web server. Viewers and library APIs need to be able to work with the image without having to decode the whole thing.

      gliv used to do very smooth GPU-accelerated panning and zooming — I’d like to be able to do the same for very high-resolution images, decoding and loading visible data into video memory as required.

    • The only image format I could find that seemed to do reasonably well was pyramidal TIFF.

    I would guess that better parallel encoding and decoding support is likely associated with solving this, since limiting the portion of the image that one needs to decode is probably necessary both for parallel decoding and for efficient high-resolution processing.
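    A sketch of the tiling arithmetic those web viewers rely on (the function names are hypothetical; a 256-pixel tile is a common convention in Deep Zoom-style pyramids):

```python
import math

def pyramid_levels(width: int, height: int, tile: int = 256) -> int:
    """Number of power-of-two levels until the whole image fits in one tile."""
    return max(1, math.ceil(math.log2(max(width, height) / tile)) + 1)

def tiles_for_viewport(x: int, y: int, w: int, h: int, tile: int = 256):
    """(col, row) indices of the tiles intersecting a viewport at one level."""
    return [(col, row)
            for row in range(y // tile, (y + h - 1) // tile + 1)
            for col in range(x // tile, (x + w - 1) // tile + 1)]

# A 65536x65536 image needs a 9-level pyramid, and a 1920x1080 viewport
# touches only a few dozen full-resolution tiles rather than the whole image.
print(pyramid_levels(65536, 65536))
print(len(tiles_for_viewport(10000, 20000, 1920, 1080)))
```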