MSFT Realizes That Some Things Need to Be Changed
Conferences like this week's Identiverse in Washington, DC, are full of directions in which various people think the field of "identity" is heading. While some of these directions gain acceptance over time (as Zero Trust has), someone always gets stuck with building the tools that turn a direction into an actual implementation.
Maria Puertas Calvo is a senior data scientist with Microsoft, and it seems to be her turn in the barrel. She has ended up shedding some light on what MSFT is doing in one particular area of security-enabling technology.
At the conference, she gave a talk titled "Behavioral Analytics for Identity Compromise Detection in Real-Time" that outlined the methods MSFT is now using to improve user authentication. The previous approach was a model with only limited parameters available for making an authentication decision: it either passed the user through the gate or failed them, based on the presence of some particular factor that may or may not actually belong to the user presenting right now.
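To make the contrast concrete, here is a minimal sketch of that older pass/fail model. The function name and factor names are hypothetical, chosen only for illustration, and are not taken from Microsoft's actual implementation.

```python
# A minimal sketch of the legacy binary gate: one decision, based only
# on whether an expected factor is present. Names here are illustrative.

def legacy_gate(presented_factors: set) -> bool:
    """Pass or fail -- no sense of how risky this sign-in actually is."""
    return "valid_password" in presented_factors

print(legacy_gate({"valid_password"}))  # True: let through
print(legacy_gate({"unknown_token"}))   # False: blocked, nothing in between
```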
What MSFT now does is a retrospective analysis of the user account presenting for authorization: the previous history of the user's accesses is compared to the current situation.
This is what MSFT calls "Behavior Analytics." It has nothing to do with CAPTCHAs or biometric sensors; it has everything to do with what the user has done before at this authorization point. The devices used, the network used, the frequency of past visits, and the areas previously authorized are the kinds of factors examined. An authorization request can be converted to an "output score" constructed by noting the presence or absence of these constituent factors. Something that confirms the user's habits adds to the score, while something that differs from what is known about the user subtracts from it.
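As a rough illustration of how such an output score might be assembled, consider the sketch below. The signal names and weights are assumptions invented for this example, not Microsoft's actual features or weighting.

```python
# Illustrative behavior-based scoring: each signal from the current
# sign-in is compared against the account's history, nudging a score
# up (familiar) or down (unfamiliar). All names and weights are invented.

FAMILIAR_WEIGHTS = {
    "known_device":   +0.30,
    "known_network":  +0.25,
    "usual_resource": +0.20,
    "typical_hours":  +0.10,
}
UNFAMILIAR_PENALTIES = {
    "known_device":   -0.35,  # an unseen device is a stronger negative signal
    "known_network":  -0.30,
    "usual_resource": -0.15,
    "typical_hours":  -0.05,
}

def behavior_score(signals: dict) -> float:
    """Combine per-signal matches against history into one output score."""
    score = 0.5  # neutral starting point
    for name, matches_history in signals.items():
        score += FAMILIAR_WEIGHTS[name] if matches_history else UNFAMILIAR_PENALTIES[name]
    return max(0.0, min(1.0, score))  # clamp to a 0-to-1 range

# A sign-in from a known device and usual resource, but an unfamiliar network:
print(behavior_score({"known_device": True, "known_network": False,
                      "usual_resource": True, "typical_hours": True}))  # 0.8
```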
Calvo says that "It's calibrated to a probability of compromise based on known attack and legitimate authentication data." This is far more customizable than the original methods, which assumed everything was a medium-level risk. With this kind of model, enterprises can set score thresholds that reflect their particular security posture. Some may want easier entry to one resource and tougher entry to another; this flexibility allows that kind of choice.
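A sketch of what that per-resource flexibility could look like follows, assuming the 0-to-1 score from the previous example. The resource names and threshold values are hypothetical, made up purely for illustration.

```python
# Per-resource risk posture: each resource demands its own minimum score.
# Thresholds and resource names are assumptions, not Microsoft's values.

THRESHOLDS = {
    "internal_wiki": 0.30,  # easier entry: convenience matters more here
    "payroll":       0.80,  # tougher entry: demand a strong behavioral match
}

def decide(resource: str, score: float) -> str:
    """Allow the request only if the score clears this resource's bar."""
    return "allow" if score >= THRESHOLDS.get(resource, 0.50) else "deny"

print(decide("internal_wiki", 0.45))  # allow
print(decide("payroll", 0.45))        # deny
```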
MSFT found that it's possible to do this kind of compromise risk assessment inline with the authentication itself. Latency did not seem to be a problem; they got real-time performance out of the combination.
Calvo even went so far as to say the new approach increased authentication result performance by 35%.
If an approach does the task better and does it roughly a third more efficiently, then this kind of direction seems a winner.
— Larry Loeb has written for many of the last century's major "dead tree" computer magazines, having been, among other things, a consulting editor for BYTE magazine and senior editor for the launch of WebWeek.