Meta AI releases CoTracker, a model for tracking any points (pixels) on a video (co-tracker.github.io)
345 points by crakenzak on Aug 29, 2023 | 84 comments


Does anyone understand the business angle for Meta here with these models? I still don't understand why their research division exists or how it relates to their core business. I'm a huge admirer of their work but don't understand the why.


- It degrades Google's and OpenAI's ability to monetize. That alone is worth gold, because tech is not a blue ocean anymore - neither from a market-opportunity nor from an investor perspective. Your competitor's loss is your gain. They can't use money they don't make to mount attacks to dislodge customers, investors, or mindshare from you. Yes, Meta is weaponizing "open" to hurt their competitors, but we live in a capitalist world; that's about as good as it gets unless you want purity of motives.

- Secondly it commoditizes AI. Zuck believes that his platforms ultimately will benefit if there is more content. Because his platforms sell the ability to show YOUR content over the rest, for coin. Just as with news/mobile games, driving the value of your content lower and lower by fostering the creation/supply of more of it is good for FB and bad for you. You have to advertise to rise above the noise. (See commoditize your complements)

- Thirdly, it reinforces the investor narrative that the company has a future because of AI. That's important, because by now the consensus is that the ads business's future is under pressure from market saturation, regulation, and competition (Apple's ability to hurt Meta at will, Amazon's increasing ads monetisation, etc.). There's more than one company that needs that narrative to land, but combined with hurting Google, this is pretty good for a company that didn't have a generative AI strategy and was betting on the metaverse a year ago.

- Fourth, it captures open source momentum behind your limited internal efforts. Llama, when it was first released, was not competitive - easily a year behind Google's efforts and further behind OpenAI's - but by being first to capture thousands of capable contributors, just a few months later it is one of the most efficient and scalable LLM systems out there, incorporating many innovations within days of their paper releases. Open models alone, without their foundational model, would have chugged along for a while before reaching this state.

What's fascinating about this situation is how it shuffles the big tech positioning. Meta and Apple, purely on a personal leadership basis, hate each other's guts, but both Apple and Meta need Google/OpenAI to fail, for different reasons.


I agree with all of your points, but I think many of these models may be there to support their ambitions in VR. One, because it allows them to build VR devices that can do things like track objects; but also, open sourcing it may mean there are more developers who can build applications that do similar things.


Not only that, but their generative models could contribute significantly to the problem of content production for the metaverse. If the end goal is for anyone to be able to construct their own virtual world, they'll need a way to translate fuzzy human commands into actual assets.


> but also open sourcing it may mean there are more developers who can build applications which can do similar things.

and they don't need to spend money on in-house developers and their training. If any company makes a useful and groundbreaking development, they can just buy them - it saves them a lot of money in the long run.


Even though I tend to agree with these, I feel uneasy about them being stated as facts.

There could be more mundane _intended_ reasons for doing research, such as working on a VR program, as mentioned in a sibling comment.

Some people believe that upper management of these companies are brilliant masterminds who play chess with the world, but, personally, I highly doubt that such a thing is even possible.


I worked there (though with no specific insight into this situation). I have no such illusions of brilliant leadership, but at the same time, if this course of action seems like the logical move to me, it will seem so to any number of other people.

I don't think these were long-term planned moves; they were more pivots in reaction to a rapidly changing external environment that required decisive action, made from a limited palette of options.

Only one of these benefits needs to be the goal; the rest is cherry on top. Stability, for example, made a similar move (giving it away) when faced with the funding environment and a much better-funded opponent, and in general it worked out well for them. That made what happened with Llama a predictable option with several positive effects (investor virtue signaling on AI, open source contributions, avoidance of negative externalities since they "leaked" the model rather than operating it).

Ultimately though, these are real effects. The reasons why the actions were taken may control the framing (brilliant strategist vs. accidental genius vs. desperate cornered CEO), but I am not sure that framing matters all that much.


> Ultimately though, these are real effects. The reasons why the actions were taken may control the framing (brilliant strategist vs. accidental genius vs. desperate cornered CEO), but I am not sure that framing matters all that much.

Seems to me the state of mind of leadership is pretty crucial in knowing what the future of these models will be.

If the strategy is to release interesting-but-not-business-relevant R&D results as a reputation-building/recruiting tool, we might expect other models in the future, but no big effort to maintain models after release, and any models that confer a big competitive advantage to be kept secret.

If the high level strategy is to enable ML content creation so you can sell ads alongside it, we might expect Facebook to adopt friendly policies towards such content.

If the strategy is to gain investor confidence about the future of the company, we might think after their reputation is established, flashy demos of closed models will serve just as well as open models.

If the strategy is to mess up Google's revenue streams for no particular benefit to Meta, it's hard to know what else we'd expect - or whether peace could break out and their strategy might change completely.


> - Secondly it commoditizes AI. Zuck believes that his platforms ultimately will benefit if there is more content. Because his platforms sell the ability to show YOUR content over the rest, for coin. Just as with news/mobile games, driving the value of your content lower and lower by fostering the creation/supply of more of it is good for FB and bad for you. You have to advertise to rise above the noise. (See commoditize your complements)

It's interesting though, because now that I know there is more and more AI-generated content on IG, I actually feel like it's less worth my time to be on there. I already thought it was a stupid activity; looking at fake people doing literally nothing is just the icing on the cake for me.


Why do I feel like I've read this before? Did you post the same comment somewhere else?


I made a shorter version of this comment on a different thread


Maybe it is just Zuck trying to atone for the damage he brought to the world. Open source AI (as in various ML/AI models developed and running inference on people's own devices) is a game changer with ramifications that are still hard to discern. E.g., what does "user content" mean anymore when you can interpolate and extrapolate from any publicly available bit of information to create infinite "near replicas"?

Rationally, in such an uncertain future, where the technical, legal and social boundaries and guardrails are not even sketched yet, what makes sense is to stay near the ring but not in it (no need to receive random punches given you are already the designated punching bag) and, above all, to keep fit.


Is there any, I mean any, indication at all that Zuck would be ready to atone for anything?

His teary “I take responsibility” perhaps, backed by which action?

His reduction of integrity staffing around the world perhaps?

His “we connect the world” and because connecting the world is good?

His shutting down of transparency reporting maybe?

He’s done this exact game before, commoditize your complements.


> He’s done this exact game before, commoditize your complements.

I would not argue with your previous points (I would be a very unlikely defender of Zuck's worldview :-) but this last one merits some discussion. It has been years since PyTorch (mentioned in the article as the Google/TensorFlow competitor) was released, and it's not clear which complement has been commoditized, if any. TensorFlow is also open source, so it cannot be further commoditized. The main entities affected by the rapid development of open source ML platforms are niche proprietary vendors of algorithmic tools - hardly Meta's complement.

People have advanced various reasons (keeping internal staff happy, attracting talent, buying goodwill, etc.), but releasing various AI tools in order to ensure "continuing commercial dominance in adtech" would be a very long game indeed. Happy to be enlightened if somebody can connect the dots deep into the future.


PyTorch fits in the same bucket as datacenter hardware. By giving the designs away, they drove the price down, made it easier and cheaper to hire people familiar with the technology, and reduced the risk of sitting on an outdated stack and being overtaken by someone else (or worse, someone else capturing open source momentum).

What happened with llama and stable diffusion is (another) cautionary tale for anyone gambling on proprietary technology advantages.

Your competitors can hurt you both directly and indirectly if you leave the open source route open. Sam Altman going on world diplomacy tours, selling regulators on the dream that they can control AI China-style by regulating a few willing large companies, makes a lot more sense when you compare the open source trajectory against proprietary models (and because Meta doesn't charge, they are a slippery fish to action).

Meta easily shaved billions of dollars from the proprietary market over time, at very little cost to themselves, because Llama was an existing long-term investment whose primary costs were already incurred at release time, and it had no serious market viability as a proprietary model without vastly more investment and cost. It's the open source contributions that made it competitive.


* Meta's products are generally higher tech than people give them credit for. They have a lot invested in having great image and video tech.

* At the scale Meta is at, and at the margins they can operate at, R&D is a valid use of funds. Apple has revenue of ~80B/yr. Microsoft has revenue of ~60B/yr. Meta has revenue of ~30B/yr. No one raises an eyebrow when Microsoft announces some go-nowhere R&D project.

* I think there might be a Meta bias towards anything with the word "tracking" in it.


I think you mean these revenue numbers are per quarter, not per year.


My hypothesis is that they want to get the absolute mindshare and adoption and decide the right product/solutions along the way. Note that there are trends (web 1, web 2, mobile etc.) and the leaders tend to grow significantly.

It is a mad rush time with the new generation of AI, and future products are going to be amazing. Meta cannot afford to have just 1 cash cow. Metaverse failed, but they have a real shot here.

A few things I can think of (beyond social networks).

- An enterprise push, starting with very high-end enterprises. Their license structure (free below X million users) allows it nicely. They can afford to have a small team and yet win big. Note that typically 80% of any enterprise product's revenue comes from, say, the top 10-20% of customers.

- Build a platform. Meta learned from the Google, and then Apple, debacles how dependent the world is on platforms. What would the new OS look like?

- Weaken Google and Apple. There are some indications this is why they partnered with Microsoft - this also ties to the point above.


I believe attracting AI talent could also be an important motivation. Many AI researchers may appreciate the open nature of their work, and having world-class AI researchers could be very important strategically for Meta.


Tracking is a core competency for VR headsets. It could also be used for AI-assisted video production, which is the bread and butter of their advertising platform. Users generate videos, and with more powerful editing tools they can churn out more interesting content than TikTok.


Are those kinds of algorithms used in VR eye tracking systems?


Eye tracking, probably not, but localizing the VR headset in space without needing an expensive rig in your living room is cool. Or perhaps even being able to build a 3D reconstruction of your living room, localize yourself within it, send the reconstruction to your grandma and let her join you in your own living room.


Commoditizing your complement (https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/)

Meta runs on UGC (User Generated Content), so making it easy and cheap to make great content is in their best interests. If they can reduce the cost of making a Billboard 100 quality song to $0, or make it so anyone with a 1.3MP webcam can use AI to get MKBHD level quality streams going, they:

- increase the amount of content being generated

- increase the quality of that content

- make the content cheaper to acquire/license (since there's dramatically more supply)


I understand, but Code Llama and CoTracker do not create content for Facebook/Instagram. Moreover, AI-generated content is not as engaging as human-generated content. I don't think the purpose is to have more content at a lower price.

The theory makes sense, but it's not clear what the complement is here.


> AI-generated content is not as engaging as human-generated content

Just wait until social media companies start to train transformers on engagement metrics.


> Mark Zuckerberg justified the move in a Facebook post: “Open source drives innovation because it enables many more developers to build with new technology. It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues.” [0]

I don't work in the space so I can't comment much, but my team (Server Operating Systems and similar) has been working with Linux, systemd, CentOS, etc. for years for essentially the same reason. Working with the community forces you to build better stuff because, in a healthy community, no one cares that a "Director from Meta wants it" - the kind of thing that kills proprietary projects in the cradle all the time.

[0] https://www.vox.com/future-perfect/23817060/meta-open-source...


I don't use any of Meta's products, but I must say Meta and Mark Z's support for open source is commendable. They've released and supported some excellent open source models and frameworks.


My guess is: 1) stunt the growth potential of their competitors (Google, Microsoft) before they acquire some sort of generational lead that forces everyone onto their platforms, and 2) reset a level playing field for a new generation of startups to someday acquire, built on THEIR way of doing things.


Aside from all the points mentioned by the other comments, there's also the factor that Zuck probably finds this kind of research cool, and considering he owns the majority of the votes, he can do almost whatever he wants with Meta.


While 'commoditizing your complements' is one of their strategies, I think it also helps them attract top-tier talent - who doesn't want the whole world to see and use their cutting-edge developments?


I remember a blog post on HN from an ex-DeepMind/Google Brain person who postulated that the research group existed for visibility: you use the research to attract talent who see the cool stuff you work on and get excited, but once they're hired, you funnel that talent elsewhere in the company to attract clicks or generate money. Research as advertising.


> Does anyone understand the business angle for meta here with these models?

Pixel tracking is pretty foundational to augmented reality: you can calculate objects' position, orientation and speed. It is obvious to me why Meta would do this research and seed the models to the wider world, who can dream up applications for it (in both senses of the word). When you consider other research projects Meta has released (image segmentation, digital twins, etc.), the AR/VR play becomes a little more obvious.

I can't wait until someone integrates these tracking models with Blender; as a non-expert in object tracking, this appears superior to the tracking pipeline that was available in Blender a few years ago when I looked into it. The fantastic thing about open source is that a project can be used in ways the author never considered.


Industrial history is full of freewheeling corporate research departments inventing the future for the benefit of all stakeholders. It is also full of dangerous examples of what happens when you either 1) lack this innovative capacity or 2) lack the foresight or courage to leverage the new technology.

As a ginormous company, given the historical trends, Meta would be rather poorly positioned if it didn't have a research wing.

We can also view this from the point of view of how resources are used for the benefit of innovation. Some companies have a distributed innovation budget (i.e. 3M's 15% time for all). A research department comprising 15% of Meta's personnel costs would be rather huge (I have no numbers for the actual cost). So having a dedicated research function can also be viewed as a rather cost-effective way to stay ahead of the curve, if benchmarked in this context.


One way to look at it is it actually prevents competitors from making money.

You release something for free - other companies that could have released it as a product no longer have a business.

It's no different from the approach of trying to hire all the top tech talent. You deprive your competitors of resources.


Meta has one of the largest supplies of proprietary data in the world (along with Microsoft and Apple). Big improvements in the world's ability to analyze data benefit them tremendously. If they're able to, for example, identify objects in videos and what's happening with them, they can make better inferences about the content of the videos, which lets them make better recommendations, causing people to watch more videos on FB/Instagram and giving them more ad revenue.

Publishing their models doesn't really enable any of their competitors (TikTok is their only real competitor at all), and being a leader in the community and getting top talent to work on these models benefits them.


While they can’t leapfrog Google or OpenAI, they can introduce a nearly unlimited number of competitors and new entrants to the market. All of whom will be aligned to building on this foundation.

Either you claim the ring for yourself or you destroy it.


Well, they managed to convince the deep learning ecosystem to move from TensorFlow to PyTorch.

So there's that. Either way, Meta's involvement in AI can't be ignored.

> they can introduce a nearly unlimited number of competitors and new entrants to the market. All of whom will be aligned to building on this foundation.

Exactly this. They are creating an ecosystem around PyTorch with the best models being available there.


At the very least, think about how this can be used for features in Instagram. I can't talk about specifics, but there are real applications in existing products.


Can't speak to the overall AI Labs model research, but this pixel tracking is directly tied to computer vision and to what they are doing at Reality Labs: advancing mixed reality passthrough with geometry mapping of your surroundings and hand-based gesture controls. There is a lot of computer vision baked into those chips, and while this is published, I am sure they have productized it using more advanced proprietary methods under patent.


Many blue-sky research divisions end up not costing as much as you'd imagine. Patent licensing, the defence value of a big patent portfolio, the occasional 'demo' which turns out to be a huge success and you're best placed to turn into a commercial product in a new field, etc.


It’s a new framework, a new language, and a platform. It’s like open sourcing React, opening Java, or opening the web browser. Controlling the platform has tremendous benefits despite giving it out for free.


I think it is largely driven by LeCun; he has the Bell Labs DNA.


Watch Caprica (Battlestar Galactica prequel)


Trying to regain some relevance in a world that has largely moved on from caring about its core product, plus the added flop of the 'metaverse'. Without doing things like this, they won't be able to compete with the large number of AI/ML startups in the hiring space either.

It's a big PR bump. Can you even remember the last time someone said something positive about that company? For me, I think it would easily be more than 10 years ago.


Whilst I generally agree with the consensus in this thread that they're trying to starve competitors of cash, I'm not particularly bullish about Meta's future.

They've sunk an obscene amount of cash into a failed product (the Metaverse), and their core business -- advertising -- is under threat from increased competition and falling engagement with their main platforms. Is Threads winning? I don't think Threads is winning, or that it will.

To me this feels like a big ship's going down, and all the glorious loot is being thrown off to shed weight, and it's floating ashore on our island.


It's weird; I want to say the same, but then I realize everyone I know is using some sort of FB product: Facebook for local groups and Marketplace, Instagram for messaging/posting, and WhatsApp, which is ubiquitous outside of the USA.

Regarding Threads, I’m also not sure. I don’t think it’s being used widely, but that’s how Reels started as well. Everyone wrote it off as TikTok clone, but now I get to see people use it while I’m on public transport in different parts of the world.

Agreed about the Metaverse part though. Oculus is still one of those things you try out for a couple of hours and then forget about.


I'm getting down-voted, so I feel I should fight my corner a little bit.

Yes, their products are ubiquitous, but as someone further down noted, some of the most important ones aren't monetised. Their ubiquity also came from their first-mover status and network effect, which is severely diluted in 2023, and I can't see that improving over time.

FWIW, I also live outside of the US and EU, and despite once being the only real platform, I'm seeing TikTok and Telegram coming up fast these days. Their position came from monopoly, which just isn't there any more.

And then there's the fact that advertising is also undergoing something of a shakeup at the moment. I work for an org that does a lot of ads, and everyone's freaking out over losing our ability to do targeting as we did. The devil will of course be in the details, but I can't see this being good at all for FB's bottom line.

And this is before we get to adblockers, apparently now up to >50% of users if Hootsuite are to be believed [0]; and the fact they've been bullshitting on ad metrics, a core revenue issue now at the class action phase [1].

My somewhat colourful analogy re: ships and islands may not have been quite on the nose, but as a take I wouldn't say it's awful. Meta need a serious reinvention and several decent new revenue streams if they're going to cut it going forward.

They had a plan, it was the Metaverse, and it sucked. They missed the boat on monetising their AI tools - something I'm eternally grateful for - but I can't see them fixing their core problems.

[0] https://backlinko.com/ad-blockers-users [1] https://www.reuters.com/business/facebook-advertisers-can-pu...


How does WhatsApp make revenue? I've used it for years. I don't pay for it, and I never see ads in it. They claim it is 'secure' and that my conversation data isn't used for anything. Even if that last claim is untrue (and I wouldn't be particularly surprised if it was), is simply mining those conversations sufficient to generate the revenue to pay for server costs? How does it work?

Edit: Business APIs and payment fees are apparently the answer. I'm genuinely impressed I've been able to use it all these years for free without ever having a hint of this.


From what I recall, the actual messages on WhatsApp are end-to-end encrypted; metadata and attachments are not. It is just another mass data-gathering service.


Not surprised that this performs so well, considering Facebook’s long experience with tracking pixels.


(A "tracking pixel" is a 1x1 remote image embedded in an email (or website) whose main purpose is that loading its URL confirms the email was opened.)
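Mechanically it's trivial: the image URL carries a unique identifier, and the "open" event is simply the server seeing that URL requested. A minimal sketch in Python; the domain, path, and `mid` parameter name are all made up for illustration:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical sender side: embed a unique message id in the pixel URL.
def pixel_tag(base_url, message_id):
    query = urlencode({"mid": message_id})
    return f'<img src="{base_url}?{query}" width="1" height="1" alt="">'

tag = pixel_tag("https://tracker.example.com/open.gif", "msg-42")

# Hypothetical server side: the "open" event is just the image request
# arriving; the server recovers the id from the requested URL.
requested_url = tag.split('src="')[1].split('"')[0]
mid = parse_qs(urlparse(requested_url).query)["mid"][0]
print(mid)  # msg-42
```

The 1x1 size and empty `alt` just make the image invisible; the tracking signal is entirely in the HTTP request.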


long experience with tracking.


You made the joke worse.


Come on, I was at Molière's level of comedy!



Demo has OOM error =(


I think it's meant to run in a private huggingface space, it's trying to allocate 46GB of VRAM.

Edit: Hm, seems to work fine now regardless


Neat. It's mentioned on Facebook's page, but here is Google's version of point tracking: https://deepmind-tapir.github.io which is Apache-2.0 licensed.


I think Meta's goals are becoming clearer: they want to make VR unbelievable. Judging by this and by SAM, they want an AI system that can understand the world around itself in real time.


People do seem to vastly underestimate the time span over which Zuckerberg thinks and the end state he is going for. His vision is for a Facebook-like ubiquity of the tech. He sees AI as the pathway there because it means he can remove so many of the hardware sensors he otherwise would need, which would be barriers to getting the cost low enough. This is how he will make a little bit of money each from literally billions of users, the way he has with Facebook. At this stage of development, he just needs these AI problems solved, however that happens, whether internally or externally. Releasing the tech only accelerates his broader vision.


I wonder how research works inside Product companies.

As an engineer working in a Product company, my focus switches between priorities set by PM and quarterly re-adjustments of the goals/strategies.

I can't imagine the same thing being done in research:

    * Hey, when are we releasing model for tracking any points?
    * Can you estimate how long it takes to fix the issue you found in tracking accuracy?


I work as a researcher (ML for robotics) inside a product company, so I can speak to this a little. The research I work on generally takes a year between conception and publication (three months of tinkering, six months of refinement and benchmarking, three months to write and submit a paper). Now, I work on more than one of these projects at a time, but they all have a similar cadence (conference deadlines do that for you), and can be "measured" by management the same way a year-long engineering project is. The questions I'm asked (or ask myself) are the same as when I was a pure engineer. Are you making headway? Are you on schedule? Do you need help? Should this project be abandoned or rescoped if it's not succeeding? Etc.


Is Microsoft a product company in your eyes? MSR has existed for a pretty long time, and they seem to do a mix of self-directed research, publishing papers, etc., and talking to other teams about the problems they have (e.g. Excel getting FP or programming-by-example features had some MSR involvement). I don't know how involved MSR was in things like some of the novel techniques used in Bing, or the RDMA-over-Ethernet work Microsoft tried to get going in their datacentres (I don't recall if private or Azure) - those things surely count somewhat as research even if not MSR.


I wonder how this compares to the motion estimation algorithms in the x264 and x265 video codecs. If it's better, then it can be used to increase video compression, by using it at the motion estimation stage for these codecs.
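For context, x264/x265 estimate motion classically, by block matching: for each block of the current frame, search a window of the previous frame for the best match (lowest sum of absolute differences) and encode the offset as a motion vector. A toy exhaustive-search sketch (function and variable names are made up, not x264's actual code):

```python
import numpy as np

def block_match(prev, curr, by, bx, bsize=8, search=4):
    """Find the motion vector for the block at (by, bx) in `curr` by
    exhaustive SAD search over a +/-`search` pixel window in `prev`."""
    block = curr[by:by + bsize, bx:bx + bsize].astype(int)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > prev.shape[0] or x + bsize > prev.shape[1]:
                continue  # candidate block falls outside the frame
            cand = prev[y:y + bsize, x:x + bsize].astype(int)
            sad = np.abs(cand - block).sum()  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

# Toy frames: the second frame is the first shifted down 1 px and right 2 px.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32), dtype=np.uint8)
curr = np.roll(np.roll(prev, 1, axis=0), 2, axis=1)

print(block_match(prev, curr, 8, 8))  # (-1, -2): where the block came from
```

Real encoders use heuristic searches (diamond, hexagon, UMH) rather than this brute force, but the objective is the same; a learned tracker could in principle propose better candidate vectors for soft, deforming content.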


15 years ago, on a laptop with a single CPU core: https://www.robots.ox.ac.uk/~gk/PTAM/

Parallel Tracking and Mapping for Small AR Workspaces (PTAM) https://www.youtube.com/watch?v=Y9HMn6bd-v8

No ML, pure computer vision in C.


PTAM was a solution to a very different type of problem. Did you look at the videos? They are almost exclusively tracking points on soft, flexible bodies which is not possible with PTAM.


OK, 11 years ago then. This time with some very primitive ML: OpenTLD https://www.youtube.com/watch?v=1GhNXHCQGsM&t=228s

Was good enough for commercialization http://tldvision.com. I think their tech is powering this https://www.autosport.com/f1/news/ai-replays-and-more-augmen...


These open source AI models will create hundreds of AI startups with no competitive advantage: a highly competitive market with low margins.


Which is clearly a good thing for consumers and for startups who would otherwise have no chance of competing with tech behemoths like Google and MS.


Nice to see that Andrew Zisserman made it into the AI age. He and Hartley were my multi-view heros back in the days... And Faugeras of course...


One modern challenge I've noticed is reverse video search. There are no good platforms for it like there are for reverse image search. I wonder if this ability to quantize videos would let you build more efficient indices of videos that you could check a given input against.
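One hedged sketch of what such an index could look like: reduce each video to a single fingerprint vector (here random arrays stand in for real frame embeddings or point-track features; all names and the mean-pooling choice are hypothetical) and match query clips by cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(42)

def fingerprint(frame_features):
    # Mean-pool per-frame feature vectors into one L2-normalised vector.
    v = frame_features.mean(axis=0)
    return v / np.linalg.norm(v)

# Stand-in "videos": random (n_frames, dim) feature arrays in place of
# real per-frame embeddings from a vision model or tracker.
library = {f"video_{i}": rng.normal(size=(30, 64)) for i in range(100)}
index = {name: fingerprint(f) for name, f in library.items()}

def search(query_features, k=3):
    # Rank library videos by cosine similarity to the query fingerprint.
    q = fingerprint(query_features)
    scores = {name: float(q @ v) for name, v in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A noisy 20-frame excerpt of video_7 should retrieve video_7 first.
query = library["video_7"][5:25] + rng.normal(scale=0.1, size=(20, 64))
print(search(query)[0])  # video_7
```

A production system would use an approximate nearest-neighbor index rather than a brute-force scan, but the shape of the problem is the same.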


Oh wow, this could be the end of complicated motion capture rigs. Indie developers can do motion capture for their own complex 3D characters in the comfort of their home. Good.


Are there models that can perform this in real-time? How does this stack up?


I'm sure there will be some future AR applicability with this


LICENSE

Attribution-NonCommercial 4.0 International

https://github.com/facebookresearch/co-tracker/blob/main/LIC...


This has sadly been the standard license for work produced at Facebook AI Research (FAIR) going back at least three or so years. Most researchers are not deeply invested in the details of licensing, so many of them simply assume it is properly open when they see Creative Commons and move on to do more research. This of course ignores the obvious problems with non-commercial licenses, such as the fact that the definition of a commercial entity might very well include large chunks of the academic world.

Thus, the choice by whatever bloodsu^Wlawyers inside Facebook pushed this through as the default has had very nasty consequences. However, researchers inside FAIR can apply internally to have an actually open license applied to their work, and I have never heard of that being denied. It is not fun (paperwork), but once you sit down and calmly explain the problems with non-commercial licensing and how it likely limits the uptake of their work, many will start pushing for it. My advice is thus to try to stay positive and talk to the researchers about this rather than try to shame them (not that I have seen any of that in this thread).

It should also be stated that although this licensing is not "good enough", it is still better than "Open"AI, DeepMind, and many others that gladly sit on all but a minuscule amount of the code, data, and models they produce.


Thank you for the great explanation and actionable suggestions.

I have to say, some of Meta's past open source and recent open-ish AI model releases have been big wins for the company's image in my mind.


Can't upvote this enough.

> It should also be stated that although this licensing is not "good enough", it is still better than "Open"AI, DeepMind, and many others that gladly sit on all but a minuscule amount of the code, data, and models they produce.

This is true, but Google Brain has historically been pretty good too.


I think all FAIR researchers and engineers are aware of the CC BY-NC license's limitations.

They still use it because releasing the code for an already-approved paper under CC BY-NC is super easy (~ self-approving by clicking a few buttons) vs. following the slow open-sourcing process needed for the MIT license (which includes approvals from a sponsoring director or two, committing to support the code for at least a year, etc.). Releasing under MIT can easily take a few weeks, with each stage requiring finding someone responsible and chasing them across time zones.

The best practice seems to be to release under CC BY-NC and re-license later under MIT. Maybe that will happen here, too.


Fortunately, "Tracking Everything Everywhere All at Once" ([1], [2]) is Apache 2.0, and it's state of the art.

1. https://omnimotion.github.io/

2. https://github.com/qianqianwang68/omnimotion


I wonder how useful these models are going to be for "Tracking Everybody Everywhere All at Once".


Is that a sincere question or are you being glib? These models are still expensive to run at scale and probably don't compete with the phone in your pocket for tracking accuracy over a long time.


Hmm, would this be compatible with the GPL? I think not, but I'm not sure.

Anyway, this seems like it would be killer for Blender to integrate for camera tracking.


> Hmm, would this be compatible with the GPL? I think not, but I'm not sure.

Not a lawyer, but I don't believe so. The Free Software Foundation says of CC BY-NC: "This license does not qualify as free, because there are restrictions on charging money for copies."[0] The same page says: "A nonfree license is automatically incompatible with the GNU GPL."[1]

[0] https://www.gnu.org/licenses/license-list.html#CC-BY-NC

[1] https://www.gnu.org/licenses/license-list.html#NonFreeSoftwa...



