> curious which fps this is? painkiller? ut? quake? reflex? diabotical?
Hah, that description kind of reminds me of Gunz, where some bugs in the game led to players being able to climb walls, basically fly around, and attack rapidly while moving.
Not that Painkiller, but eSports as a painkiller worked for me. Can't sleep because of a toothache and no painkiller drugs at home? Lap for an hour at some circuit on my PS and fall asleep at least a little bit. Then pay a visit to the dentist :-)
Maybe this is somewhat related to the subject of the post: detach consciousness from what I am doing and the pain gets a little detached too. However, I doubt it would work with serious levels of pain.
It does; there are some very promising studies on patients who suffer massive body burns (most of their skin is gone).
The traditional approach is painkillers and some meditation.
VR helps a lot.
On two fronts: it distracts your brain, and it can teach you how to reach that distracted-from-the-pain state by yourself.
My understanding is that a large percentage of our neurons are specialized in image processing, and that keeping those busy exploring a VR world keeps them from being reassigned to pondering the pain.
Strafe jumping in Quake is far from trivial, and various weapon-aided jumps are required to play at the highest level; UT also has its own dodge jump mechanics and shield jumps.
https://www.youtube.com/watch?v=A1MJPMl8B5Y
Being unable to execute this during a match such as a duel means being at a massive disadvantage.
Here is an example of good strafe jumping coupled with a plasma jump to secure map/item control and put the opponent on the back foot:
https://youtu.be/GFTmYD95-cQ?t=1592
Compared to an RTS, yes. Compared to tactical shooters like CS, no. Once you reach a certain level of aim, movement and the associated mechanics become the main distinguishing factor, along with using the sounds in the game to track your opponent's position and conceal yours.
And I'm 95% confident that surf maps were one of the inspirations behind Titanfall. Here's a completely broken tool-assisted speedrun of the obstacle course: https://www.youtube.com/watch?v=yXtggqe6oo0
Location: Chicago
Remote: Yes
Willing to relocate: Yes
Technologies: Python, C, C++, Matlab, SQL
Résumé/CV: Available upon request
Email: rckvalde@gmail.com
Summary: BS Physics, some finance graduate courses. Most of my programming experience is from computational physics and financial data. I have some knowledge of circuits and gates from physics labs and EE electives. Most of my self-study has been finance related: GARCH, VaR, stochastics, derivative pricing, market microstructure, and machine learning (Murphy, Bishop).
Recently did freelance work for a portfolio manager at a family office that allocates to hedge funds.
I'm eager to learn and receive mentorship from bright industry people while improving my programming and math abilities.
Hey, I just wanted to thank you for your reply covering the two semesters of stochastics in a condensed reply nearly 11 months ago in a different thread. I got too distracted reading up on it and forgot to reply. I can't tell you how much that helped with putting it all together conceptually. Given there is no way to reply to it now or DM you, I figured this was the next best thing. Also, is there any way to get in contact with you for future questions outside of here? Especially since you mentioned you were previously working on an AI project. Any thoughts on High-Dimensional Statistics: A Non-Asymptotic Viewpoint by Martin J. Wainwright? It's currently what I'm trudging through. You wouldn't happen to have a graycatmathhelp@email, would you? Thanks again.
Hey, I think I meet these requirements. I have a degree in physics and some computational physics experience from classes, doing some simple parallelization of a nonlinear system (bug population), but I have had a hard time bridging the gap to breaking into tech. I do have an idea, and having a mentor would be very helpful. I was wondering where I could DM you. Thank you.
I've been teaching myself stochastics; I personally find it really interesting. I really like Peter Medvegyev's book Stochastic Integration Theory; do you have any thoughts on it? You did a really good job of summarizing the bigger picture in a few words, which really helped, so thank you. Any additional recommended resources that tie things together like this?
edit: also liked the write-up, yugeten, it was helpful
To get very far in stochastic processes, you usually need a good background in measure theory. Here is an overview of about two semesters of a fast, often rare, grad course, plus much more:
The integral of freshman calculus is the Riemann integral. It was soon seen to be clumsy to inadequate for theoretical developments. Around 1900 in France, E. Borel's student H. Lebesgue improved the situation: he did his Lebesgue integral. Don't worry: for every function integrated in freshman calculus, Lebesgue's integral gives the same answer. Otherwise Lebesgue's integral is much more general and, it turns out, is the solid foundation of probability and stochastic processes.
Lebesgue's work starts with measure, which is just a notion of area. So, yes, on the real line, the Lebesgue measure of the closed interval [0,1] is 1, just the same as we learned in grade school. But with Lebesgue's work, it's tough to think of a subset of the real line that is not measurable (that is, does not have a measure, an area, in Lebesgue's sense) -- the usual example of a subset of R that is not measurable needs the axiom of choice. With measure defined on R, Lebesgue can define the Lebesgue integral of functions
f: R --> R
For this we need a meager assumption (which essentially always holds), but let's charge on.
For the generalizations, crucial for probability and stochastic processes, start with a non-empty set M. R is an example of such a set M. Define a measure on M, say, m. Then for some subsets A of M, we have the measure of set A as m(A). That is, we have the area (volume, whatever) of A. But to do much, we need more assumptions: We want the set of all subsets A of M to which we assign measure m(A) to be a sigma algebra, say, F. That is,
we want M to be an element of F; for each A in F we want the complement, say, the set M - A (obvious but sloppy notation), to be an element of F; and for A_i in F for i = 1, 2, ..., we want the union of the A_i to be an element of F. So, we want F closed under complements and countable unions. No, we don't permit uncountable unions, a point that keeps popping up in stochastic processes. Permitting uncountable unions results in, in one word, a mess.
The sigma part of sigma algebra is supposed to refer to the countable unions as in the capital letter sigma used in calculus, etc., for countable sums.
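To make the axioms concrete, here is a minimal Python sketch, assuming a tiny toy set M = {1, 2, 3} (my example, not part of the course material). On a finite set, countable unions reduce to finite ones, so the power set is a sigma algebra:

    # Check the sigma algebra axioms on the power set of a tiny finite M.
    from itertools import chain, combinations

    M = frozenset({1, 2, 3})
    F = {frozenset(s) for s in chain.from_iterable(
            combinations(M, r) for r in range(len(M) + 1))}  # power set of M

    assert M in F                                 # M itself is in F
    assert all(M - A in F for A in F)             # closed under complements
    assert all(A | B in F for A in F for B in F)  # closed under (finite) unions
    print(len(F))                                 # 8 sets, i.e., 2^3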
So, we have the set M, the sigma algebra of subsets F, and the measure m. Then the pair (M, F) is a measurable space and the triple (M, F, m) is a measure space. In probability theory and stochastic processes, the measure space used is usually denoted with the space capital Omega, the sigma algebra script F, and the measure P.
Right -- I just let the cat out of the bag: Probability P, as we saw in elementary treatments of probability, is a measure, and (Omega, script F, P) is a measure space.
So, for A a subset of the space Omega and an element of the sigma algebra script F, P(A) is the probability of set A. Set A is called an event. Now we are back to what you saw in elementary treatments of probability.
So, it went too fast -- we mentioned measure theory with (M, F, m), then took the special case (Omega, script F, P), a probability space, and got back to the P(A) we saw in elementary treatments.
This jump from a measure space to a probability space was made by A. Kolmogorov in 1933 and has been the foundation of 99 44/100% (as in Ivory soap!) of advanced work in probability and stochastic processes since then.
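Here is a toy sketch of such a probability space in Python, assuming one fair die as the example (the names are mine, purely illustrative): Omega is the sample space, F here is all subsets of Omega, and P is the probability measure.

    # A tiny probability space (Omega, F, P) for one fair die.
    omega = frozenset(range(1, 7))     # Omega = {1, ..., 6}

    def P(A):
        """P(A) for an event A, i.e., a subset of Omega in F."""
        assert A <= omega              # A must be an event
        return len(A) / len(omega)     # uniform measure on Omega

    even = frozenset({2, 4, 6})        # the event "roll is even"
    print(P(even))                     # 0.5
    print(P(omega))                    # 1.0 -- Kolmogorov's normalization
    print(P(frozenset()))              # 0.0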
So, back to the case of measure space (M, F, m). Suppose we have a function
f: M --> R
We want to integrate f as in freshman calculus. Well, for each subset A of M that is an element of the sigma algebra F, we have the measure (area) of A, that is, m(A). Well, with one more assumption, that's enough -- and that's what Lebesgue did! And it assumes less than Riemann did and, thus, gets more generality in some quite good ways.
Then for the case of Lebesgue integrating some
f: R --> R
compared with the Riemann integral, Lebesgue partitions on the Y axis and Riemann partitions on the X axis. So, Riemann partitions on the domain of the function and Lebesgue partitions on the range. In both cases we are working with lots of little rectangles that get smaller as we converge. So, Lebesgue lets the domain be much more general, e.g., enough for probability theory.
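Here is a small numeric sketch of that difference, assuming the toy function f(x) = x^2 on [0, 1] (true integral 1/3): the Riemann sum partitions the domain, while the Lebesgue-style sum partitions the range and weights each level by the measure of its preimage.

    # Riemann (x-axis partition) vs. Lebesgue-style (y-axis partition).
    f = lambda x: x * x
    n = 100_000
    xs = [i / n for i in range(n)]

    # Riemann: partition the domain, sum f(x_i) * dx.
    riemann = sum(f(x) for x in xs) / n

    # Lebesgue-style: partition the range into levels, estimate the measure
    # of each preimage band {x : y_lo <= f(x) < y_hi} as the fraction of
    # sample points in it, then sum y_lo * m(band).
    levels = [k / 100 for k in range(101)]
    vals = [f(x) for x in xs]
    lebesgue = 0.0
    for y_lo, y_hi in zip(levels[:-1], levels[1:]):
        band_measure = sum(y_lo <= v < y_hi for v in vals) / n
        lebesgue += y_lo * band_measure

    print(riemann, lebesgue)   # both approach 1/3 as the partitions refine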
So, back to a probability space and a function
f: Omega --> R
With the meager assumption (that f is measurable -- to be brief, I'm skipping over the details here only because nearly all functions are measurable; it's tough to construct one that isn't, so general is Lebesgue's work), f is a random variable. It is more common to denote random variables by X, Y, Z, etc. Now, finally, maybe after getting lost in a swamp for some years, you have a solid definition of a random variable. Then each point in Omega is one probability or statistical trial.
Then, sit down for this amazing fact: The expectation of real valued random variable X, denoted by E[X], is just the Lebesgue integral of the function
X: Omega --> R
So, in this way, Kolmogorov borrowed from Lebesgue and got a solid foundation for probability and stochastic processes.
Now you know why Lebesgue's measure theory is key to advanced work in probability and stochastic processes -- E[X] is just Lebesgue's integral generalization of the calculus Riemann integral.
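A toy sketch of that fact, assuming three fair coin flips as a finite Omega (my example): with Omega finite, the Lebesgue integral E[X] is just a finite sum over the points of Omega.

    # E[X] as the integral of the function X: Omega -> R against P.
    from itertools import product

    omega_space = list(product("HT", repeat=3))   # Omega: 8 outcomes
    P = {w: 1 / 8 for w in omega_space}           # the measure P, P(Omega) = 1

    X = lambda w: w.count("H")                    # random variable: number of heads

    # E[X] = integral of X dP = sum over omega of X(omega) * P({omega})
    EX = sum(X(w) * P[w] for w in omega_space)
    print(EX)                                     # 1.5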
Long-standard, very careful, polished treatments of measure theory are in Royden, Real Analysis, and the first half of Rudin, Real and Complex Analysis. Continuing with probability: Breiman, Probability; Neveu, Mathematical Foundations of the Calculus of Probability; Loeve, Probability; etc. Uh, Loeve was long at Berkeley, and both Breiman and Neveu were Loeve students.
You can use this work to make good sense out of Brownian motion, and there are plenty of books there.
And you can do more on stochastic processes, e.g.:
Ioannis Karatzas and Steven E. Shreve, Brownian Motion and Stochastic Calculus.
R. S. Liptser and A. N. Shiryayev, Statistics of Random Processes.
Ronald K. Getoor, Markov Processes and Potential Theory.
This material is now decades old. With it, one can give high-end treatments of some topics in Wall Street style investing, e.g., exotic options and, in particular, the Black-Scholes formula.
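For instance, here is a minimal Python sketch of the Black-Scholes European call price, assuming no dividends (the parameter values at the bottom are purely illustrative):

    # Black-Scholes price of a European call, no dividends.
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, T, r, sigma):
        """S: spot, K: strike, T: years to expiry,
        r: risk-free rate, sigma: annualized volatility."""
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    print(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # about 10.45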
This material was called by Breiman "graduate probability" as he left the field and did, e.g., CART -- classification and regression trees, maybe the most important seed for the current work in machine learning. Breiman was interested in making sense out of some complicated medical data.
The potential of such graduate probability for careers in academia and/or applications now seems open to question. But, taking a larger view, probability and stochastic processes just will NOT go away and will remain some of the most powerful tools for making sense out of reality -- we can be sure that over the horizon we will be glad to have such a solid foundation and WILL encounter important new applications, ours or those of others.
There is an old point: Long it was common for the pure mathematicians, yes, to pay close attention to measure theory but essentially to ignore its role in probability theory, saying that probability theory was just a trivial special case of measure theory for positive measures (one can also have measures with negative or complex values) where the measure of the whole space is 1.
But: Probability theory does a LOT with the assumption of independence, which is tough to find in texts on just measure theory. Also, the Radon-Nikodym theorem (an amazing generalization of the fundamental theorem of calculus) IS treated in measure theory, but its applications crucial in probability and stochastic processes, to conditioning and sufficient statistics, usually are not.
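A small numeric sketch of the Radon-Nikodym idea, assuming the exponential distribution as the toy case: its density f(x) = e^(-x) is the Radon-Nikodym derivative dP/dm of P with respect to Lebesgue measure m, so P(A) is the integral of f over A -- checked here for A = [1, 2].

    # P(A) recovered by integrating the density dP/dm over A = [1, 2].
    from math import exp

    a, b, n = 1.0, 2.0, 100_000
    dx = (b - a) / n
    integral = sum(exp(-(a + (i + 0.5) * dx)) * dx for i in range(n))

    exact = exp(-a) - exp(-b)      # P([1, 2]) in closed form
    print(integral, exact)         # both about 0.2325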
So, with this overview, you might find Royden, Rudin, Breiman, Neveu (especially elegant) not so hard reading and a lot of fun.
You will learn about the modes of convergence of sequences of random variables and the classic results: the central limit theorem, the laws, weak and strong, of large numbers, ergodic theory (pour cream into coffee and stir until the cream returns to the distribution it had in the coffee just after pouring it in -- Poincare recurrence), and martingales. Martingales are amazing and provide the easiest proof of the strong law of large numbers and some of the strongest inequalities in math.
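A quick simulated sketch of a martingale, assuming a fair +/-1 coin game as the toy example: the wealth process S_n = X_1 + ... + X_n satisfies E[S_{n+1} | S_1, ..., S_n] = S_n. The check below only verifies the weaker consequence that E[S_n] stays at 0.

    # Fair-game wealth is a martingale; average wealth stays at 0.
    import random

    random.seed(1)
    paths, steps = 20_000, 50
    mean_wealth = [0.0] * (steps + 1)
    for _ in range(paths):
        s = 0
        for t in range(1, steps + 1):
            s += random.choice((-1, 1))     # one fair bet
            mean_wealth[t] += s / paths

    print([round(m, 3) for m in mean_wealth[::10]])  # all near 0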
So, amazing stuff: Go into a lab, measure something, walk out, and have the value of a random variable -- okay, that's good enough for an elementary course. But typically in math we can't calculate just what we want in one step, so we have to iterate, approximate, and hopefully converge, in some appropriate sense, to just what we want. Well, in those classic results, we have notions of convergence.
The most amazing case is strong convergence, where we have a sequence of random variables that converges to some random variable. From the elementary treatments, having a sequence of randomness converge to more randomness seems absurd. But with Kolmogorov's treatment, a random variable is just a function, and from math such as Rudin, Principles of Mathematical Analysis, we know quite a lot about convergence of sequences of functions, and in measure theory we learn more. Then a sequence of random variables converging to another random variable is just one mode of convergence of sequences of functions. So, we can approximate a random variable X by having a sequence of random variables converge to X. How 'bout that! Thank you Borel, Lebesgue, Kolmogorov, von Neumann, etc.
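And a minimal sketch of that convergence, assuming iid uniform draws (my example): by the strong law of large numbers, the running sample means -- each itself a random variable on the same Omega -- converge almost surely to 1/2.

    # Running sample means of iid Uniform(0, 1) draws converge to E[X] = 0.5.
    import random

    random.seed(0)
    n = 100_000
    total = 0.0
    for i in range(1, n + 1):
        total += random.random()              # one trial: a point of Omega
        if i in (10, 100, 1_000, 10_000, 100_000):
            print(i, total / i)               # running means settle toward 0.5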