Switch on your streaming service of choice or open up the website for your preferred department store and a recommendation system is sure to kick in.
“You liked this TV series, so we think you’ll like this one!” Or: “As you’re looking at a pink linen skirt, think about buying these cream espadrilles to go with it!” Recommendation systems are key commerce drivers because they surface the products customers are most likely to purchase. But they don’t fit neatly into existing machine-learning toolchains.
Some of the best-known recommendation engines are for content. YouTube’s eerie sense of what you might like to watch next is one example, and the ultimate champion of this game is TikTok: It’s deliciously addictive, precisely because the algorithms know what your little heart desires.
In some cases, however, there is more to a recommendation than predicting taste. An online shop may earn different margins on different product lines, and it holds information the engine itself does not: people might not be buying ski gear now, but they damn sure will later in the year. Rubber Ducky Labs, a San Francisco-based startup, wants to make it easier for teams to debug, analyze and improve their recommendation systems.
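To make that idea concrete, here is a minimal sketch, not Rubber Ducky Labs’ actual product, of how a shop might re-rank a model’s raw relevance scores with business signals the model never sees, such as per-item margin and seasonality. All names, fields and weights here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    sku: str
    model_score: float  # relevance score from the recommendation model
    margin: float       # per-unit margin in dollars; the model never sees this
    in_season: bool     # business knowledge, e.g. ski gear peaks in winter

def rerank(candidates, margin_weight=0.1, season_penalty=0.5):
    """Blend the model's relevance score with business signals.

    Illustrative only: a real system would tune these weights against
    revenue and engagement metrics rather than hard-coding constants.
    """
    def adjusted(c):
        score = c.model_score + margin_weight * c.margin
        if not c.in_season:
            score *= season_penalty  # demote ski gear in July, and so on
        return score
    return sorted(candidates, key=adjusted, reverse=True)

# Hypothetical usage: the model likes the ski jacket, the business (for now) does not.
items = [
    Candidate("ski-jacket", model_score=0.92, margin=12.0, in_season=False),
    Candidate("linen-skirt", model_score=0.85, margin=9.0, in_season=True),
    Candidate("espadrilles", model_score=0.80, margin=7.5, in_season=True),
]
for c in rerank(items):
    print(c.sku)  # linen-skirt, espadrilles, ski-jacket
```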
The team is working in a space shaped by a deeper trend: How do you know whether an AI is doing good work? Increasingly, algorithms do things that humans don’t fully understand, and without a feedback loop, evaluating them gets tricky.