Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 38 Posts
  • 1.67K Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • Game streaming services are never going to catch on because the capital needed to build out the infrastructure is ridiculous.

    I don’t know about “never”, but I’ve made similar arguments on here predicated on the cost of building out the bandwidth — I don’t think that we’re likely to get any time soon to the point where computers living in datacenters are a general-purpose replacement for non-mobile gaming, just because of the cost of effectively running a monitor-computer cable from every computer that might be used concurrently to the nearest datacenter. Any benefit from having a remote GPU just doesn’t compare terribly well with that cost.

    But…I can think of specific cases where they’re competitive.

    First, where power is your relevant constraint. If you’re using something like a cell phone or other battery-powered device, it’s a way to deal with power limitations. I mean, if you’re using even something like a laptop without wall power, you probably don’t have more than 100 Wh of battery power, absent USB-C and an external power station or something, due to airline restrictions on laptop battery size. If you want to be able to play a game for, say, 3 hours, then your power budget (not just for the GPU, but for everything) is something like 30W. You’re not going to beat that limit unless the restrictions on battery size go away (which…maybe they will, as I understand that there are some more-fire-safe battery chemistries out there).

    And cell phone battery constraints are typically even tighter — like, 20 Wh. That means that for three hours of gaming, your power budget — here because of size constraints on the phone — is maybe about 6 watts.
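    To make the arithmetic explicit — using the rough figures above (a ~100 Wh laptop battery, a ~20 Wh phone battery, a 3-hour session), which are illustrative, not measurements:

```python
# Sustained power budget implied by a battery capacity and a target runtime.
# Figures are the rough ones from above: ~100 Wh laptop, ~20 Wh phone.

def power_budget_w(battery_wh: float, hours: float) -> float:
    """Average draw (watts) that drains the battery in `hours`."""
    return battery_wh / hours

laptop_w = power_budget_w(100, 3)  # ~33 W for the whole machine
phone_w = power_budget_w(20, 3)    # ~6.7 W
```

    And that budget covers the screen, radios, and SoC too, not just rendering — which is why offloading the GPU work is attractive.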

    If you want power-intensive rendering on those platforms, then remote rendering is your only real option.

    Second, there are (and could be more) video game genres where you need dynamically-generated images, but where latency isn’t really a constraint. Like, a first-person shooter has some real latency constraints. You need to get a frame back in a tightly bounded amount of time, and you have constraints on how many frames per second you need. But if you were dynamically-rendering images for, I don’t know, an otherwise-text-based adventure game, then the acceptable time required to get a new frame illustrating a given scene might expand to seconds. That drastically slashes the bandwidth required.
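    A rough sketch of how drastically that slashes bandwidth — every number here is an assumption for illustration (a ~15 Mbit/s 1080p60 stream, a ~500 kB image per scene, one scene every 5 seconds), not a measurement:

```python
# Compare sustained bandwidth for a live 60 fps stream vs. an occasional
# still image, using assumed (illustrative) numbers.
stream_mbit_s = 15.0                      # assumed 1080p60 game stream
image_bits = 500 * 1024 * 8               # assumed ~500 kB image per scene
seconds_per_image = 5.0                   # one illustration every few seconds

image_mbit_s = image_bits / seconds_per_image / 1e6  # ~0.82 Mbit/s
ratio = stream_mbit_s / image_mbit_s                 # order-of-magnitude gap
```

    Under those assumptions the occasional-image case needs less than a twentieth of the sustained bandwidth, and it also tolerates seconds of delivery latency.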

    What I don’t think is going to happen in the near future is “gaming PC/non-portable video game consoles get moved to the datacenter”.


  • I don’t know what the situation is for commercial games — I don’t know if there’s a marketplace like that — but I do remember someone setting up some repository for free/Creative Commons assets a while back.

    goes looking

    https://opengameart.org/

    It’s not highly-structured in the sense that someone can upload, say, a model in Format X and someone else can upload a patch against that model or something like that with improvements and changes, though. Like, it’s not quite a “GitHub of assets”.

    I haven’t looked at it over time, but I also don’t think that we’ve had an explosion in inter-compatible assets there. Like, it’s not like a community forms around a particular collection of chibi-style sprite artwork at a particular resolution, and then lots of libre games use those assets, the way RPGMaker or something has collections of compatible commercial assets.

    I’m sure that there must be some sort of commercial asset marketplace out there, probably a number, though I don’t know if any span all game asset types or if they permit easily republishing modifications. I know that I’ve occasionally stumbled across a website or two where individuals sell 3D models.




  • I think that you have two factors here. GDC isn’t specific to PC gaming, and additionally, a lot of titles will see both PC and console releases.

    For a game that is intended to see only a PC release, my guess is that that might affect the game’s system requirements.

    For games that see console releases, this manifests as questions like “will fewer people have consoles?” — current-gen consoles are very unlikely to change spec, just price. “Is the PlayStation 6 going to be postponed?” is a big deal if you were going to release a game for that hardware.




  • As it currently exists on other platforms, Gaming Copilot lets you ask guide-like questions about the game you’re currently playing. Microsoft’s official site offers an example question like “Can you remind me what materials I need to craft a sword in Minecraft?”

    I haven’t used consoles for a few generations, but historically, switching between a game and a Web browser on a console wasn’t all that great, and neither was text entry. I dunno if things have improved, but it was definitely a pain in the neck to refer to a website in-game historically.

    On Linux (Wayland), I swap between fullscreen desktops when playing games, and I often have a Web browser with information relevant to the game on another desktop. If it helps enable some approximation of a workflow like that for console players, that doesn’t sound unreasonable.

    There are other objections I’d have, like not really wanting someone logging what my voice sounds like or giving Microsoft even more data on me to profile with via my searches. But it sounds to me like the basic functionality has a point.


  • ping spikes

    If you’re talking about ping times to the game server, I’d want to isolate the problem to the WiFi network first, to be sure that it’s not something related to the router-ISP connection or a network issue even further out. You can do something like run mtr (which does repeated traceroutes) to see at what hop the latency starts increasing. Or leave ping running against your router’s IP address — the first hop you see if you run a traceroute or mtr. If it’s your WiFi connection, then the latency should be spiking specifically to your router, at the first hop, and you might see packet loss. If the issue is further out on the network, then that’s the hop where you’ll start seeing latency increase and packet loss.

    You might need to install mtr — I don’t know whether Fedora has it installed by default.
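    If you’d rather not eyeball a scrolling terminal, a little script can pull the per-packet latencies out of ping’s output and flag spikes. This is just a sketch — the spike threshold is an arbitrary choice, and the sample output (including the 192.168.1.1 router address) is made up:

```python
import re

def ping_latencies(output: str) -> list[float]:
    """Extract per-packet round-trip times (ms) from `ping` output."""
    return [float(m.group(1))
            for m in re.finditer(r"time=([\d.]+) ?ms", output)]

def spikes(latencies: list[float], factor: float = 3.0) -> list[float]:
    """Return samples more than `factor` times the median latency."""
    median = sorted(latencies)[len(latencies) // 2]
    return [t for t in latencies if t > factor * median]

# Made-up sample output, as if pinging a router at an assumed address:
sample = """\
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.2 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=1.4 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=250 ms
64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=1.3 ms
"""
```

    Run the same thing against the router and against a host further out; if only the first hop shows spikes, the WiFi link is the suspect.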

    Please help. I don’t want to go beneath house to run cat6. It’s dark and there are spiders.

    Honestly, I think that everyone should use wired Ethernet unless they need their device to be able to move around, as it maintains more-consistent and lower network latency, provides higher bandwidth, and keeps the traffic off the air; 2.4 GHz is used for all sorts of other useful things, like gamepad controllers (I have a Logitech F710 that uses a proprietary 2.4 GHz protocol, and at some point, when some other 2.4 GHz device showed up, it caused loss of connectivity for a few seconds, which was immensely frustrating). And you have interference from stuff like microwaves and all that in the same frequency range. It also avoids some security issues — we’ve had problems discovered with wireless protocols.

    But, all right. I won’t lecture. It’s your network.

    If you think that Fedora and maybe your driver is at fault, one thing you might check is your kernel logs. If the driver is hitting some kind of problem and then recovering by resetting the interface, that might cause momentary drop-outs. After it happens, take a gander at $ journalctl -krb, which will show your kernel log for the current boot in reverse order, with the most-recent stuff up top. If you have messages about your wireless driver, that’d be pretty suspicious.

    If the driver is at fault, I probably don’t have a magic fix, unless you want to try booting into an older kernel. If you still have one installed, and GRUB — the bootloader that Linux distros typically use — is set up to show your list of kernels at boot, then you can try choosing that older kernel and see if it works with your newer distro release and whether the problem goes away. I don’t know whether current Fedora shows such a list by default or hides it behind a splash screen; I haven’t used Fedora in quite some years. You might want to whack Shift or an arrow key during boot to get boot to stop at GRUB. If you discover that it’s a regression in the driver, I’d submit a bug report (“no problems with kernel version X, these messages and momentary loss of connectivity with kernel version Y”), which would probably help get it actually fixed in an update.

    You might also try just using a different wireless Ethernet interface, like a USB wireless Ethernet interface, and seeing if that magically makes it go away. Inexpensive USB interfaces are maybe $10 or $15. I’d probably look for some indication that it’s a driver problem before doing that.


  • I care less about speakerphone than I do Bluetooth headsets or regular phone speaker use near me.

    The speakerphone makes more noise!

    Yes, but people already have conversations between each other in public where we can hear both sides. We train ourselves to tune those out. A speakerphone is analogous to that case of another human talking.

    What I find most disruptive about phone conversations near me versus listening to two other people talking (which I can tune out) is that the speech pattern of a phone user is to say something and then pause. The problem is that that is exactly the signal that someone has said something to you and that your attention is required. I have a harder time ignoring those one-sided conversations than tuning out a conversation where I can hear both sides, because it’s basically constantly giving my head the “you just missed something and need to respond” signal. It’s like when someone says something to you, waits for a few seconds, and then your attention gets triggered and you look up and say “what?”

    Now, the article does also reference someone turning a speakerphone way up, and that I can get, if you’re playing it louder than a human would speak. But that’s also kinda a special case.

    I think that in general, the best practice is to text, and I think that most would agree that that’s uncontroversially the best approach in public. But after that, I’d personally prefer to have speakerphone use, above headset or regular phone use.

    EDIT: One interesting approach — I mean, smartphone vendors would always like to have new reasons to sell more hardware, so if they can figure out how to make it work, they might jump on it — might be phones capable of picking up subvocalization.

    https://en.wikipedia.org/wiki/Subvocalization

    Subvocalization, or silent speech, is the internal speech typically made when reading; it provides the sound of the word as it is read.[1][2] This is a natural process when reading, and it helps the mind to access meanings to comprehend and remember what is read, potentially reducing cognitive load.[3]

    This inner speech is characterized by minuscule movements in the larynx and other muscles involved in the articulation of speech. Most of these movements are undetectable (without the aid of machines) by the person who is reading.[3]

    You’d probably also need some sort of speech synthesizer rig capable of converting that into speech.

    A conversation where someone’s using headphones/earbuds and a subvocalization-pickup phone would avoid some of the limitations of texting (not limited to text input speed on an on-screen keyboard or having to look at the display), provide for more privacy for phone users, and not add to sound pollution affecting other people in the environment.

    EDIT2: Other possibilities for the speaker side:

    Bone conduction

    This has actually been done, but has some limitations on the sound it can produce, and you need to have a device in contact with your head.

    https://en.wikipedia.org/wiki/Bone_conduction

    Bone conduction is the conduction of sound to the inner ear primarily through the bones of the skull, allowing the hearer to perceive audio content even if the ear canal is blocked. Bone conduction transmission occurs constantly as sound waves vibrate bone, specifically the bones in the skull, although it is hard for the average individual to distinguish sound being conveyed through the bone as opposed to the sound being conveyed through the air via the ear canal. Intentional transmission of sound through bone can be used with individuals with normal hearing—as with bone-conduction headphones—or as a treatment option for certain types of hearing impairment. Bones are generally more effective at transmitting lower-frequency sounds compared to higher-frequency sounds.

    The Google Glass device employs bone conduction technology for the relay of information to the user through a transducer that sits beside the user’s ear. The use of bone conduction means that any vocal content that is received by the Glass user is nearly inaudible to outsiders.[47]

    Phased-array speakers to produce directional sound

    Here, you need to have the device track its position and orientation relative to a given user’s ears, then have a phased array of speakers that each play the sound at just the right phase offset to produce constructive interference in the direction of the user’s ears — it’s beamforming with sound. Other users will have a hard time hearing the sound, which will be garbled and quieter, because of destructive interference in their direction.

    https://en.wikipedia.org/wiki/Beamforming

    Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception.[1] This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.

    We more-frequently use this for reception than for transmission, with microphone arrays, but you can make use of it for transmission. You’ll need a minimum number of speakers in the array to be able to play beams of sound with constructive interference in the direction of a given number of listeners.
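    To sketch the phase-offset idea concretely — this is just the textbook delay-and-sum steering calculation for a uniform line of speakers, with made-up array numbers, not any particular product’s design:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def steering_delays(num_speakers: int, spacing_m: float,
                    angle_deg: float) -> list[float]:
    """Per-speaker delays (seconds) so the emitted wavefronts add up
    constructively in the direction `angle_deg` off the array's broadside."""
    sin_a = math.sin(math.radians(angle_deg))
    raw = [n * spacing_m * sin_a / SPEED_OF_SOUND
           for n in range(num_speakers)]
    base = min(raw)              # shift so no delay is negative
    return [d - base for d in raw]

# Example: 8 speakers spaced 5 cm apart, aimed 30 degrees off broadside.
delays = steering_delays(8, 0.05, 30.0)
```

    Off the steered direction, the per-speaker signals arrive out of phase and partially cancel — that’s the “garbled and quieter” effect for other listeners.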


  • I don’t presently need to use any service that requires use of a smartphone. I’ve never had a smartphone tied to a Google/Apple account. I don’t even think that I currently have any apps from the Google Play Store on my phone — just open-source F-Droid stuff.

    It’s true that hypothetically, you could depend on a service that does require you to use an Android or iOS app to make use of it. There are services that do. Lyft, for example, looks like it requires use of an app, though Uber doesn’t appear to. And I can’t speak to your specific situation, but at least where I am, in the US, I’ve never needed to use an Android or iOS app to make use of some class of service.

    But I will say that services will track what people use, and if people continue to use interfaces other than smartphone apps to make use of their services, that makes it more likely that that’s what they’ll provide.

    I can’t promise that somewhere in the world — in some country or city or specific place — someone won’t be required to use an Android or iOS app, if not now then down the line, with no alternative. They can, at least, limit their use to that app, rather than using the phone more-broadly. I don’t make zero use of my smartphone software now — like, when I’m driving, I’ll use the open-source OsmAnd to navigate. I sometimes check for Lemmy updates when waiting in line or similar. I don’t normally listen to music while just walking around, but if I did, I’d use a music player on the phone rather than a laptop for it. But I try to shift my usage to the laptop as much as is practical.


  • I don’t intend to get rid of my smartphone, but I do carry a larger device with me, and try to use the phone increasingly as just a dumbphone and cell modem for that device to tether to.

    That may not be viable for everyone — it’s not a great solution to “I’m standing in line and want to use a small device one-handed”. And iOS/Android smartphones are heavily optimized to use very little power, and carrying any other device means more power draw. It probably means carrying a larger case/bag/backpack of some sort with you. And most phone software is designed to be aware of cell network constraints, like acting differently based on whether you’re connected to a cell network for data or a WiFi network for data.

    However, it doesn’t require shifting to a new phone ecosystem. It also makes any such future transition easier — if I have a lot of experience tied up in Android/iOS smartphone software, then there’s a fair bit of lock-in, since shifting to another platform means throwing out a lot of experience in that phone software. If my phone is just a dumbphone and a cell modem, then it’s pretty easy to switch.

    And it’s got some other pleasant perks. Phone OSes tend to be relatively-limited environments. They’re fine for content consumption, like watching YouTube or something, but they’re considerably less-capable in a wide range of software areas than desktop OSes. A smartphone has limited cooling; laptops are significantly more-able to deal with heat. Due to very limited physical space, smartphones usually have very few external connectors — you probably get only a single USB-C connector, and no on-phone headphone jack. You’re probably looking at a USB hub or adapters and rigging up pass-through power if you want anything else. Laptops normally have a variety of USB connectors, a headphone jack, maybe a wired Ethernet connector, maybe an external display jack.

    Laptops also tend to have a larger battery, so it’s reasonable to use the laptop to power external devices like trackballs/larger trackpads, keyboards, etc. You get a larger display, so you don’t have to deal with the workarounds that smartphones use to make their small screens as usable as possible. You don’t have to deal with the space constraints that make a touchscreen necessary, with your fingers in front of whatever you’re looking at (though you can get larger devices that do have touchscreens, if you want). You have far more choices on hardware, and that hardware is more-customizable (in part because it likely isn’t an SoC, though you can get an SoC-based laptop if you want). And software support isn’t a smartphone-style “N years, tied to the phone hardware vendor, at which point you either use insecure software or throw the phone out and buy a new one”.




  • I never really got into the Assassin’s Creed series, but I did enjoy The Saboteur, which I understand is somewhat similar, albeit getting a little long in the tooth these days. I don’t think that there are going to be any new games in that series, though. Might be worth taking a glance at it.

    On another note…the live-service elements going in also highlight one major concern I have with games purchased on platforms like Steam or on console download services or whatever. Publishers can push updates. So, normally you sell a game once, and there’s no future revenue from it. But…if you go out of business or just want to sell the rights, you can sell it to someone else, who now has the ability to push software updates to the computers of people who own the game, and can include, say, ads, data-harvesting, live-service stuff, microtransactions, or whatever else might generate money.

    Traditionally, that’s not how games worked. A player who buys a game on physical media can always use that game as-is; it won’t get worse in the future.


  • Yeah, there’s some nuclear power plant here in the US that uses sewage for cooling. It’s out in the middle of the desert, Arizona or New Mexico or something, somewhere where it’d be a pain to bring in a bunch more water.

    searches

    Arizona.

    https://en.wikipedia.org/wiki/Palo_Verde_Nuclear_Generating_Station

    The Palo Verde Generating Station is a nuclear power plant located near Tonopah, Arizona[5] about 45 miles (72 km) west of downtown Phoenix. Palo Verde generates the second most electricity of any power plant in the United States per year, and is the second largest power plant by net generation as of 2021.[6] Palo Verde has the third-highest rated capacity of any U.S power plant. It is a critical asset to the Southwest, generating approximately 32 million megawatt-hours annually.

    At its location in the Arizona desert, Palo Verde is the only nuclear generating facility in the world that is not located adjacent to a large body of above-ground water. The facility evaporates water from the treated sewage of several nearby municipalities to meet its cooling needs. Up to 26 billion US gallons (~100,000,000 m³) of treated water are evaporated each year.[12][13] This water represents about 25% of the annual overdraft of the Arizona Department of Water Resources Phoenix Active Management Area.[14] At the nuclear plant site, the wastewater is further treated and stored in an 85-acre (34 ha) reservoir and a 45-acre (18 ha) reservoir for use in the plant’s wet cooling towers.


  • New York City is a port city. It has an effectively infinite supply of salt water, which you can use for evaporative cooling, albeit with some extra complications.

    EDIT: Hell, you can use the waste energy from an evaporative cooler to drive a distiller to generate fresh water from some of the evaporated salt water, if you want. Microsoft is doing that combined datacenter-nuclear-power-plant thing. IIRC — if I’m not conflating two different cases of an AI datacenter using the full output of a power plant — they have the entire output of a nuclear power plant never touching the grid (and thus avoiding any transmission cost overhead and, as a bonus, regulatory requirements attached to transmission and distribution from power generation):

    https://arstechnica.com/ai/2024/09/re-opened-three-mile-island-will-power-ai-data-centers-under-new-deal/

    Re-opened Three Mile Island will power AI data centers under new deal

    Microsoft would claim all of the nuclear plant’s power generation for at least 20 years.

    From past reading, desalination via reverse osmosis has wound up being somewhat cheaper than via distillation, but combined generation-distillation using waste heat is a thing. IIRC Spain has some company that does combined generation-distillation facilities.

    And in a case like that, you have the waste heat from generation and the waste heat from use all in one spot, so you’ve got a lot of water vapor to condense.

    EDIT2: Yeah, apparently distillation used to be ahead for desalination, but reverse osmosis processes improved, and currently hold the lead:

    https://www.sciencedirect.com/science/article/pii/S1359431124026292

    As desalination is a process of removing dissolved solids such as salts and minerals from water, there are two main types of technology commonly used in the industry: thermal-based and membrane-based [22]. The thermal-based desalination processes, such as multi-stage flash distillation (MSF) and multiple-effect distillation (MED) were once predominantly used in the water sector until membrane-based desalination technology, such as reverse osmosis (RO), matured and offered lower operating costs [23]. Hence, RO is the most used desalination process today, producing between 61 % and 69 % of the total global desalinated water, followed by MSF (between 17 % and 26 %) and MED (between 7 % and 8 %) [9], [16], [19], [20], [24].