Claude 3 Opus by a wide margin. I'm a regular GPT-4 user who had tried Claude 2, and went into Claude 3 with muted expectations. I was shocked at how much more capable Claude 3 Opus was compared to GPT-4; it's not even close for my work (optimization, programming/algorithms). I asked 20 questions and Claude 3 solved all of them, while GPT-4 failed on all of them. What's more surprising to me is that I don't remember GPT-4 being this bad; I was similarly impressed when GPT-4 was released. The disparity was significant enough for me to reconsider my OpenAI subscription, but the browsing capability and the Android app kept me. I use Opus for my work by default now, though, and fall back on GPT.
GPT-4 has been significantly neutered. Personally, I suspect it's a combination of model updates that aren't that good and resources being moved over to GPT-5. I suspect there's a culture of "no roll-back".
If you go to the OpenAI playground and try out the original model, gpt-4-32k-0314, you'll see a dramatic difference in responses, especially for coding.
I fear that Claude will become neutered too; it's already refusing a prompt that I was using a few days ago without issue. In fact, I can still go back to the old chat and continue it, but if I restart it, Claude refuses.