Are you installing needed libraries?
For example, the installer runs because it doesn’t need any, but then your app needs, say, VCRedist 2010, and so won’t run until you add the vcrun2010
extra library with Winetricks or the menu in Bottles.
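If you’re outside the Bottles menu, the plain Winetricks route is a one-liner; a sketch, assuming your app lives in a prefix like ~/.wine-myapp (adjust the prefix path to yours):

```
WINEPREFIX=~/.wine-myapp winetricks vcrun2010
```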
What happens next? A wave of even worse disregard for things.
After all, if we can bring back the mammoth, who cares if we off <insert species here>, they’ll just bring it back next rotation. /s
Some would think this is horrible, but to me, it would be wholly dependent on the title/what was bought and sold.
Nothing in this world is free. Development, servers, character licensing, it all costs money, and if those costs aren’t passed down, you’ll never be able to afford to continue. So for a game, especially one with online or continuing content, to be free to play, money has to come from somewhere.
Where the road splits is what is being sold. Things that give an edge in the game, pay-to-win? Uninstalled. Time limited FOMO triggers? Disgusting. Random loot boxes? Begone foul spirit.
On the other end, if all that is for sale is shiny baubles and trinkets, things no one needs but can have as a reward for “supporting development”? I’m cool with that. If I feel no requirement to pay up, it’s being handled right, and if I like the game, sure, I can part with a fiver to look like I’m dipped in gold or whatever the supporter pack adds, to help them keep the lights on (at least until I get bored of it in a week or two and switch back :P).
I’d be curious what the divide is between the two kinds of purchases. I’m sure I’ll be disappointed to find it was mostly P2W scum, though.
Forgive me, I’m no AI expert who can fully relate a tokens-per-second benchmark to the average query Siri might handle, but I will say this:
Even in your article, only the largest model ran at 8 tokens per second; the others ran much faster, and none of them were optimized for a specific task, they were just being benchmarked.
Would it be impossible for Apple to be running an optimized model specific to expected mobile tasks, leveraging their own hardware more efficiently than we can, to meet their needs?
I imagine they cut out most worldly knowledge and use a lightweight model, which is why there is still a need to hand some requests off to ChatGPT or Apple’s servers. Would that let them trim Siri down to perform well enough on phones for most requests? They also advertised launching AI on M1 and M2 chip devices, which are not M3 Max either…
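For a rough sense of scale, here’s the back-of-the-envelope math; the reply length and speeds below are purely my own illustrative assumptions, not anything Apple has published:

```python
# Rough latency estimate for short, assistant-style replies.
# All numbers are illustrative assumptions, not Apple specs.
reply_tokens = 40            # roughly a couple of spoken sentences
for tps in (8, 30, 60):      # decode speeds, from slow to optimized
    print(f"{tps:>2} tok/s -> ~{reply_tokens / tps:.1f} s to generate")
```

Even the slow end is sluggish-but-usable for a short answer, and any task-specific optimization only moves it in the right direction.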
Onboard AI chips will allow this to be local.
Phones do not have the power to ~~~
Perhaps this is why these features will only be available on iPhone 15 Pro/Max and newer? Gotta have those latest and greatest chips.
It will be fun to see how it all shakes out. If the AI can’t run most queries on the phone with all this advertising of local processing…there’ll be one hell of a lawsuit coming up.
EDIT: Finished looking for what I thought I remembered…
Additionally, Siri has been locally processed since iOS 15.
https://www.macrumors.com/how-to/use-on-device-siri-iphone-ipad/
I think there’s a larger picture at play here that is being missed.
Getting the weather has been a standard feature for years now. Nothing AI about it.
What is “AI” is: “Hey Siri, what will the weather be at my daughter’s recital coming up?”
The AI processing, calculated on-device if what they claim is true, is:
Well {Your phone contact name}, it looks like it will {remote weather response} during your {calendar event from phone} with {daughter from contacts} on {event date}.
That is the divide between on-device and cloud processing. The phone already has your contacts and calendar and does that work offline, rather than educating an online server about your family, events, and location; it requests the bare minimum from the internet, in this case nothing more than if you opened the weather app yourself and put in a zip code.
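Here’s a toy sketch of that split; every name, value, and the weather call itself are made up for illustration:

```python
# Toy model of the on-device/cloud split described above.
from datetime import date

# Local, private data that never leaves the phone.
contacts = {"daughter": "Emma"}
calendar = [{"title": "recital", "who": "daughter",
             "date": date(2024, 6, 14), "zip": "90210"}]

event = calendar[0]  # on-device: find the matching calendar event

def fetch_forecast(zip_code, when):
    # The ONLY thing sent over the network: an anonymous weather
    # lookup, same as typing a zip code into a weather app yourself.
    return "sunny"  # stand-in for a real weather API response

forecast = fetch_forecast(event["zip"], event["date"])

# On-device: fill the reply template with the private details.
print(f"It looks like it will be {forecast} during your "
      f"{event['title']} with {contacts[event['who']]} on {event['date']}.")
```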
Genuine curiosity…what are some proposed solutions we think Valve can implement to solve this crisis?
I ask because the line about VAC being a joke gave me a thought…VAC is such a joke because it is so simple and non-invasive. Do we really want VAC “upgraded” to the level of more effective anti-cheats, where it cuts down the bots but is now a monitoring kernel service? Just a few weeks ago people were in an uproar about the new Vanguard anti-cheat…do we want that for Valve? Or do we think they can do it a better way?
As an aside, honestly, in my mind community servers with a cooperative ban-list plugin might be the most effective solution of all…it would still be a game of whack-a-mole since they can always churn out new accounts, but that’s what gives me pause about other solutions, because the only real solutions that slow cheaters start to sound like charging for the game (to make account creation costly) or implementing a bulletproof system of hardware bans, which means invasive solutions that can be certain they aren’t virtual machines or such.
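For what it’s worth, the plugin side of that idea is almost trivially small; a minimal sketch, where the URL, file format, and IDs are all hypothetical:

```python
# Hypothetical community ban-list check for a game server plugin.
import json, urllib.request

BANLIST_URL = "https://example.org/community-bans.json"  # hypothetical

def load_banlist(url=BANLIST_URL):
    """Fetch the shared ban list: a JSON array of banned account IDs."""
    with urllib.request.urlopen(url) as resp:
        return set(json.load(resp))

def on_player_connect(account_id, banned):
    """Called when a player joins; deny entry if they are on the list."""
    if account_id in banned:
        print(f"Kicking {account_id}: on community ban list")
        return False
    return True

# banned = load_banlist()   # refresh on a timer in a real plugin
# on_player_connect("76561198000000000", banned)
```

The hard part isn’t the code, it’s governance: who curates the list, and how fast the mole gets whacked.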
Like a non-profit, with tax breaks and the ability to earn enough to operate, but little more than that or the taxes come back with a vengeance.
Everything needs money to run but when there’s the option to shovel out whatever bait it takes to chase the dragon of uncapped earnings, they’re not in it to keep us informed, just to keep us spending.
I use Minio https://github.com/minio/minio, though there are others. As someone who never worked with s3 before but wanted to try it and use it with some apps that supported s3 as a storage target, it’s been working fine for me though I’m certainly not using it to its potential. Has web access and all.
There might be others worth a look, too.
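For anyone curious what “apps that support s3” looks like in practice, pointing a standard S3 client at Minio is just a matter of overriding the endpoint. A sketch with boto3, where the bucket, file, and credentials (Minio’s out-of-the-box defaults) are placeholders:

```python
# Sketch: talking to a local Minio server with boto3, the usual
# AWS S3 client for Python. Endpoint/credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # Minio's default API port
    aws_access_key_id="minioadmin",        # default creds; change them!
    aws_secret_access_key="minioadmin",
)

s3.create_bucket(Bucket="backups")
s3.upload_file("notes.txt", "backups", "notes.txt")
print(s3.list_objects_v2(Bucket="backups")["Contents"][0]["Key"])
```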
I’ll take a compromise where “3.1” is etched in each head end, and I can trust that “3.1” means something, and start with that.
The real crux of the issue is that there is no way to identify the ability of a port or cable without trying it, and even if labeled there is/was too much freedom to simply deviate and escape spec.
I grabbed a cable from my box to use with my docking station. Short length, hefty girth, firm head ends; it certainly felt like a featured video/data/dock cable…it did not work. It did work with my numpad/USB-A port hub thing though, so it had some data ability (I did not test if it was 2.0 or 3.0). The cable that DID work with my docking station was actually a much thinner, weaker-feeling one from a portable monitor I also had. So you can’t even judge by wiring density.
And now we have companies using the port to deviate from spec completely, like the Raspberry Pi 5 technically using USB-C, but at a power level unsupported by spec. Or my video glasses that use USB-C connections all over, with a proprietary design that ensures only their products work together.
Universal appearance, non-universal function, universal confusion.
I hate it. At least with HDMI, RCA, 3.5mm, Micro-USB…I could readily identify what a port and plug was good for, and 99/100 the unknown origin random wires I had in a box worked just fine.
Actually, that leads me to another point:
Once upon a time, the concept behind a universal USB-C connector was that we could do exactly that.
Laptop? Phone? Camera? America? Germany? Japan? Power? Connect to the TV? Internet?
Wouldn’t matter anymore. USB-C to cover it all. Voltage high for the laptop, low for the camera, all available just the same in every country, universal. So yes, fill the airports and hotels with them. Use them for power and to play videos on the TV. Because we weren’t supposed to have to question the voltage or abilities of the ports and cables in use.
Did/will that future materialize?
I feel the only legitimate place for a €1 cable is those USB-A to C cables that you get with things for 5V charging. That’s it. And the limits on those are obvious just from the A plug end.
Anything that wants to be USB-C on both ends should be fully compatible with a marked spec, not indistinguishable from a 5V junk wire or freely cherry picking what they feel like paying for.
Simply marking on the cable itself what generation it’s good for would be a nice start, but the real issue is the cherry-picking. The generation numbers don’t cover a wire that does maximum everything except video. Or a proprietary setup that does power transfer in excess of spec (Dell, Raspberry Pi 5). But they all have the same ends and the same lack of demarcation, leading to the confusion.
The worst part is, I could accept that as a generational flaw. The newer ones get better, the old ones lying around do less. OK, that’s the beast of progress.
But no. They still make cables today that do power only. They still make cables that do everything except video. Why? Save a couple cents. Make dollars off multiple product lines, etc. Money.
What could have been the cable to end all cables…just continued to make USB a laughing stock of confusion.
Don’t even get me started on the device side implementations…
That or they do prescription inserts, or just sell you the computer and you get other glasses that do, like Viture https://www.viture.com/
As someone with video glasses like those included here, it might be a step forward but it has a lot of room for improvement before it will survive mass market.
For starters, unlike a screen, these glasses must be tailored to your eyesight. If you wear prescription glasses, you will need to fit two pairs at once or have some way for the video ones to carry your prescription. And a huge problem in the market right now is pupillary distance, i.e. eye spacing/head size. Mass market wants one-size-fits-all, but that means those outside the designed size will have difficulty using them, if they can at all.
These are problems experienced across the current market: Rokid, XReal, and Viture alike.
And then of course there’s power: if we keep to 1080p, we’ll need more computing power and battery than a Steam Deck screen does, which some handhelds might be able to accommodate, maybe more so depending on the weight and shape trade-offs of the new style. But so far it might be disappointing, especially if it has the appearance of a huge screen and still needs to render at low res and upscale/FSR to hit performance.
Just my thoughts. Still cool, but no confidence in it as a winner yet.
To be honest, I stopped being a “qualified player” a few years ago. Nowadays I load up a nice long Survival round, usually against Infested to chew on, with whatever Frame I’ve forgotten how to play, to enjoy the loop without stress. So I’m not in it to farm all the stuff either. Or, I’ll play the story quest if a new one is out, since they are pretty well scaled for solo play and/or give you what you need.
Other than that I just can’t compete. I tripped some time ago and didn’t keep up with the latest meta builds, so now I struggle to have the things “required” to effectively participate in public sessions or the latest missions. And don’t even get me started on Rivens, Shards, Liches or whatever.
If I join a Zariman round I’ll probably die. Not as much now that I have Titania, but I’m also not clearing rooms in a single volley like everyone else.
I’m a filthy casual and I still find a way to have fun, so there’s something there worth keeping.
Ah, but would you keep Workshop access?
If so, Garry’s Mod is almost cheating. There’s a bit of everything in it.
So that and Warframe. I picked up WF ten years ago and it’s still in my top ten recently played games. Though I have a love/hate relationship with the Metagame it turned into, it still remains.
I often compare Natural Selection to Survivorship Bias, because as far as I can tell that is what it is.
There is no “drive” or mythical force to be better. A mutation occurs and the result works or doesn’t.
Those that work have survived until today, and those that don’t failed to reproduce sufficiently to reach today.
That said, today we actually have what I call “Un-natural Selection”, and that is when we humans take something that would have failed naturally and ensure its success through our intervention. Think seedless plants or humans/animals with chronic disabilities. Natural selection would likely have eliminated them for failure to function or reproduce, but through our will they endure. For now.
I wouldn’t say it’s only Critical; LTSC still gets the regular security fixes. They don’t get Feature updates, but they still get Security updates, is how it’s normally put. And it’s not as bad as it sounds. Even as a gamer, stability is a good thing, and there is plenty of third-party software for any desirable “features” that get delayed or skipped. If LTSC gets any fewer security updates, it’s because it has less built-in crap that needs updating.
I’ve never needed funny graphics in my taskbar search bar, or Bing in my start menu, or the Edge bar or whatever it was that now clutters my friends’ taskbars as of the last Feature update. But I still get my security fixes and Defender definitions every Patch Tuesday.
But the trick is getting a copy, true.
One thing I can think of is an overzealous corporate security solution blocking or holding back your email purely for having an attachment, or because it misunderstands/presumes the cipher-looking text file to be an attempt to bypass filtering.
Other than that might be curious questions from curious receivers of the key/file they may not understand, and will not be expecting. (“What’s this for? Is this part of the contract documents? Oh well, I’ll forward it to the client anyway”)
Other than that, it’s a public key, go for it. It’s hard (for me anyway) to decide to post them to public keyservers when the bot-nets read those for spam, so this might be the next best thing?
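If the attachment filter turns out to be the blocker, one workaround (assuming GnuPG) is exporting the key ASCII-armored and pasting it straight into the message body instead of attaching a file:

```
gpg --armor --export you@example.com
```

The output is a plain “-----BEGIN PGP PUBLIC KEY BLOCK-----” text block, so there’s no attachment for the filter to hold.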