
[Image: Market Opportunity Breakdown]

Market Insight


Cloud Computing solutions, including Software, Infrastructure, Platform, Unified Communications, Mobile, and Content as a Service, are well-established and growing. The evolution of these markets will be driven by the complex interaction of all participants, beginning with end customers.

Edge Strategies has conducted over 80,000 interviews on behalf of our clients, in both mature and emerging markets, with decision-makers across the full cloud ecosystem, including Vendor, Service Provider, and End Customer organizations.

Typical projects include:

  • Identifying target market segments
  • Designing Service Portfolios
  • Designing Application and Service Features
  • Developing Value Propositions and Messaging for each customer segment
  • Analyzing competitive alternatives and determining best practices
  • Designing Activation Programs
  • Building processes to reduce churn, build loyalty, and measure Customer Lifetime Value
  • Improving the User Experience

We provide current, actionable insight into business decision processes across market segments, from SMBs to Large Enterprises. Our work leverages a deep understanding of the business models of key Cloud Ecosystem participants, including:

  • Cloud Service Providers (CSPs)
  • Web Hosting Providers
  • Communication Service Providers
  • ISVs and Automation Providers
  • MSPs and IT Channels

Our experience allows us to get up to speed quickly on new projects. We are experts in designing and conducting quantitative and qualitative research. Based on our focused findings, we work with our clients to make the decisions necessary to gain early success in a variety of markets, including SaaS, IaaS, PaaS, UCaaS, and mobile/device services.    

 

Related White Papers and Briefings - Register to access

  • Metro 1
  • Metro 2
  • Metro 3
  • Metro 4
  • Metro 5
  • Metro 6
  • Metro 7
  • Metro 8
  • Metro 9
  • EdgeHybrid

News

  • Adobe has released a video generator in public beta within its generative AI (genAI) tool, Adobe Firefly. The company calls the tool the first “commercially safe” video generator on the market: it has been trained on licensed content and public-domain material, meaning it should not be able to generate material that infringes someone else’s copyright. Firefly can generate clips either from text instructions or by combining a reference image with text instructions, and there are settings to customize camera angles, movements, and distances. A paid subscription is required to use the video generator. Firefly Standard, which costs about $11 a month, includes 2,000 credits; that should be enough for 20 five-second videos at 1080p resolution and a frame rate of 24 frames per second. Firefly Pro, which costs three times as much as the Standard plan, includes 7,000 credits, enough for 70 five-second clips in 1080p at 24 frames per second. Adobe plans to eventually release a model for lower-resolution videos with faster image updates, as well as a 4K-resolution model for Pro users.
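
As a rough check on the pricing arithmetic above, here is a minimal sketch in Python. The credit totals and clip counts come from the plans as described; the flat per-clip credit cost is an inference from those figures, not a rate Adobe publishes.

    # Rough arithmetic behind the Firefly plan figures quoted above.
    # Assumption: credits scale linearly with clip count (inferred, not stated by Adobe).
    CREDITS_PER_CLIP = 2000 // 20  # 100 credits per five-second 1080p/24fps clip, from the Standard plan

    def clips_for(credits: int) -> int:
        """Clips a credit allowance covers, under the linear assumption above."""
        return credits // CREDITS_PER_CLIP

    print(clips_for(2000))  # 20 clips on Firefly Standard (about $11/month)
    print(clips_for(7000))  # 70 clips on Firefly Pro (about three times the Standard price)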

  • Chatbots quickly surpassed human physicians in diagnostic reasoning — the crucial first step in clinical care — according to a new study published in the journal Nature Medicine. The study suggests physicians who have access to large language models (LLMs), which underpin generative AI (genAI) chatbots, demonstrate improved performance on several patient care tasks compared to colleagues without access to the technology. The study also found that physicians using chatbots spent more time on patient cases and made safer decisions than those without access to the genAI tools. The research, undertaken by more than a dozen physicians at Beth Israel Deaconess Medical Center (BIDMC), showed genAI has promise as an “open-ended decision-making” physician partner. “However, this will require rigorous validation to realize LLMs’ potential for enhancing patient care,” said Dr. Adam Rodman, director of AI Programs at BIDMC. “Unlike diagnostic reasoning, a task often with a single right answer, which LLMs excel at, management reasoning may have no right answer and involves weighing trade-offs between inherently risky courses of action.”
The conclusions were based on evaluations of the decision-making capabilities of 92 physicians as they worked through five hypothetical patient cases. They focused on the physicians’ management reasoning, which includes decisions on testing, treatment, patient preferences, social factors, costs, and risks. When responses to their hypothetical patient cases were scored, the physicians using a chatbot scored significantly higher than those using conventional resources only. Chatbot users also spent more time per case — by nearly two minutes — and they had a lower risk of mild-to-moderate harm compared to those using conventional resources (3.7% vs. 5.3%); the short sketch after this item walks through that arithmetic. Severe harm ratings, however, were similar between groups. “My theory,” Rodman said, “[is] the AI improved management reasoning in patient communication and patient factors domains; it did not affect things like recognizing complications or medication decisions. We used a high standard for harm — immediate harm — and poor communication is unlikely to cause immediate harm.”
An earlier 2023 study by Rodman and his colleagues yielded promising, yet cautious, conclusions about the role of genAI technology. They found it was “capable of showing the equivalent or better reasoning than people throughout the evolution of clinical case.” That data, published in the Journal of the American Medical Association (JAMA), relied on a common testing tool used to assess physicians’ clinical reasoning. The researchers recruited 21 attending physicians and 18 residents, who worked through 20 archived (not new) clinical cases in four stages of diagnostic reasoning, writing and justifying their differential diagnoses at each stage. The researchers then performed the same tests using ChatGPT based on the GPT-4 LLM. The chatbot followed the same instructions and used the same clinical cases. The results were both promising and concerning. The chatbot scored highest in some measures on the testing tool, with a median score of 10/10, compared to 9/10 for attending physicians and 8/10 for residents. While diagnostic accuracy and reasoning were similar between humans and the bot, the chatbot had more instances of incorrect reasoning. “This highlights that AI is likely best used to augment, not replace, human reasoning,” the study concluded. Simply put, in some cases “the bots were also just plain wrong,” the report said.
Rodman said he isn’t sure why the genAI study pointed to more errors in the earlier study. “The checkpoint is different [in the new study], so hallucinations might have improved, but they also vary by task,” he said. “Our original study focused on diagnostic reasoning, a classification task with clear right and wrong answers. Management reasoning, on the other hand, is highly context-specific and has a range of acceptable answers.” A key difference from the original study is that the researchers are now comparing two groups of humans — one using AI and one not — while the original work compared AI to humans directly. “We did collect a small AI-only baseline, but the comparison was done with a multi-effects model. So, in this case, everything is mediated through people,” Rodman said.
Researcher and lead study author Dr. Stephanie Cabral, a third-year internal medicine resident at BIDMC, said more research is needed on how LLMs can fit into clinical practice, “but they could already serve as a useful checkpoint to prevent oversight.” “My ultimate hope is that AI will improve the patient-physician interaction by reducing some of the inefficiencies we currently have and allow us to focus more on the conversation we’re having with our patients,” she said. The latest study involved a newer, upgraded version of GPT-4, which could explain some of the variations in results.
To date, AI in healthcare has mainly focused on tasks such as portal messaging, according to Rodman. But chatbots could enhance human decision-making, especially in complex tasks. “Our findings show promise, but rigorous validation is needed to fully unlock their potential for improving patient care,” he said. “This suggests a future use for LLMs as a helpful adjunct to clinical judgment. Further exploration into whether the LLM is merely encouraging users to slow down and reflect more deeply, or whether it is actively augmenting the reasoning process, would be valuable.”
The chatbot testing will now enter the next of two follow-on phases, the first of which has already produced new raw data to be analyzed by the researchers, Rodman said. The researchers will begin looking at varying user interaction, where they study different types of chatbots, different user interfaces, and doctor education about using LLMs (such as more specific prompt design) in controlled environments to see how performance is affected. The second phase will also involve real-time patient data, not archived patient cases. “We are also studying [human computer interaction] using secure LLMs — so [it’s] HIPAA compliant — to see how these effects hold in the real world,” he said.
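
The harm figures quoted above are easy to misread, so here is a minimal sketch of the arithmetic. It uses only the two published percentages (3.7% vs. 5.3% risk of mild-to-moderate harm); the derived absolute and relative differences are illustrative calculations, not numbers reported by the study.

    # Arithmetic behind the harm-risk figures quoted from the Nature Medicine study above.
    chatbot_harm = 0.037        # mild-to-moderate harm risk, physicians with chatbot access
    conventional_harm = 0.053   # mild-to-moderate harm risk, conventional resources only

    absolute_reduction = conventional_harm - chatbot_harm        # 1.6 percentage points
    relative_reduction = absolute_reduction / conventional_harm  # roughly a 30% relative reduction

    print(f"absolute reduction: {absolute_reduction:.1%}")   # 1.6%
    print(f"relative reduction: {relative_reduction:.0%}")   # 30%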

  • OpenAI will integrate “o3” into GPT-5 instead of releasing it separately, streamlining adoption while signaling a shift toward fewer, more controlled AI models amid rising competition and cost pressures. “In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3,” CEO Sam Altman said in a post on X. The decision marks a departure from OpenAI’s recent strategy of offering multiple model variants, suggesting the company is prioritizing ease of deployment and product clarity for enterprise users. “We want AI to ‘just work’ for you; we realize how complicated our model and product offerings have gotten,” Altman said. “We hate the model picker as much as you do and want to return to magic unified intelligence.” With enterprises facing rising costs for AI adoption and competitors like DeepSeek introducing lower-cost alternatives, OpenAI’s move could also be a response to market pressures. A single, more comprehensive model may help justify AI investments by reducing the complexity of integrating multiple systems while ensuring compatibility with OpenAI’s broader ecosystem. OpenAI will also launch GPT-4.5, codenamed “Orion,” as its final model without chain-of-thought reasoning, Altman added, without providing a timeline.
A change of approach: The rapid proliferation of AI models has intensified competition among research labs, each striving to develop smarter, more efficient systems with larger context windows and specialized functions. While this innovation has expanded capabilities, it has also introduced complexity, making it harder for users to choose the right model. “The burgeoning list of models has added complexity for the average user who just wants chat to work without having to figure out which model to use,” said Abhishek Sengupta, practice director at Everest Group. “For developers, it’s a mixed bag – on one hand it takes away the need to incessantly check which model is best suited for which task (at least for OpenAI) but on the other hand you are outsourcing your choice of optimal model to OpenAI.” While model selection may still occur, OpenAI could handle the process rather than users. Analysts suggest this could also be an attempt to avoid the race between model performance and cost by bundling all AI capabilities under a single system. “Maybe the consolidation of models into a single source of intelligence is a move toward creating an intelligence platform,” Sengupta added. “Maybe that’s the differentiation they are placing their bets on. Time will tell.”
Rising competition and open-source threats: This shift could also reshape the economics of AI, giving OpenAI greater control over costs, deployment, and market positioning. “I believe merging it has multiple benefits, not just in terms of costs related to training, go-to-market strategies, and customer delivery, but also in giving OpenAI more leverage to drive it as a ‘system’ and extract more value through a simplified business model,” said Neil Shah, partner and co-founder at Counterpoint Research. “This will change the economics on both ends, which investors will be keen to monitor and measure.” This comes at a time when AI competition is intensifying, with DeepSeek disrupting the market with cost-effective models, highlighting the pressure on OpenAI to refine its strategy. “One cannot rule out this move being triggered by competitive models like DeepSeek, which are highly cost-effective,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research.
“Of course, there shall be many other models out there that will be more cost-effective and innovative, and most importantly, will be made open source and not proprietary like OpenAI.” Importantly, not all organizations have the resources, need, or strategic planning to navigate complex, tiered pricing structures. “Despite the rise of SaaS, many large enterprises prefer EULA contracts since they are incubated from any risk associated with sudden and unplanned need for resources,” Gogia added. “In the same breath, not all organizations require a customized model and the flexibility that comes along with it. Many of their use cases are simplistic enough to use a model that keeps the billing and the use simple.”

  • When the EU on Tuesday said it was not, at this time, moving ahead with critical legislation involving privacy and genAI liability issues, it honestly reported that members couldn’t agree. But the reasons why they couldn’t agree get much more complicated. The EU decisions involved two seemingly unrelated pieces of legislation: one dealing with privacy efforts, often called the cookie law, and the other dealing with AI liability. The EU decisions are in the annexes to the Commission’s work programme for 2025, in Annex IV, items 29 and 32. For the AI liability section (“on adapting non-contractual civil liability rules to artificial intelligence”), the EU found “no foreseeable agreement. The Commission will assess whether another proposal should be tabled or another type of approach should be chosen.” For the privacy/cookie item (“concerning the respect for private life and the protection of personal data in electronic communications”), the EU said, “No foreseeable agreement – no agreement is expected from the colegislators. Furthermore, the proposal is outdated in view of some recent legislation in both the technological and the legislative landscape.” Various EU specialists said those explanations were correct, but the reasons behind the decisions from those member countries were more complex.
Andrew Gamino-Cheong, CTO at AI company Trustible, said different countries had different, and incompatible, positions. “The EU member states have started to split on their own attitudes related to AI. On one extreme is France, which is trying to be pro-innovation and [French President Emmanuel] Macron used the [AI summit] this past week to emphasize that,” Gamino-Cheong said. “Others, including Germany, are very skeptical of AI still and were pushing for these regulations. If France and Germany are at odds, as the economic heavyweights in the EU, nothing will get done.” But Gamino-Cheong, along with many others, said there is a fear that the global AI arms race may hurt countries that impose too many compliance requirements. The EU is seen as “being too aggressive, overregulating” and “the EU takes a 2-sentence description and writes 14.5 pages about it and then contradicts itself in multiple areas,” Gamino-Cheong said.
Ian Tyler-Clarke, an executive counselor at the Info-Tech Research Group, said he was not happy that the two proposed bills did not go forward because he fears how those moves will influence other countries. “Beyond the EU, this decision could have broader geopolitical consequences. The EU has long been a global leader in setting regulatory precedents, particularly with GDPR, which influenced privacy laws worldwide. Without new AI liability rules, other regions may hesitate to introduce their own regulations, leading to a fragmented global approach,” Tyler-Clarke said. “Conversely, this could trigger a regulatory race to the bottom, where jurisdictions with the least restrictions attract AI development at the cost of oversight and accountability.”
A very different perspective comes from Enza Iannopollo, a Forrester principal analyst based in London. Asked about the failure to move forward on the privacy bill, Iannopollo said, “Thank God that they have withdrawn that one. There are more pressing priorities to address.” She said the privacy effort suffered from the rapid advances in web controls, including some changes made by Google. “Regulators were not convinced that they would improve things,” Iannopollo said.
Regarding the AI liability rules, Iannopollo said that she expects to see those come back in a revised form. “I don’t think this is a final call. They are just buying time.” The critical factor is that another, much larger piece of legislation, called simply the EU AI Act, is just about to kick in, and regulators wanted to see how that enforcement went before expanding it. “They want to see how these other pieces of the framework are going to work. There are a lot of moving parts so (delaying) is wise.”
Another analyst, Anshel Sag, VP and principal analyst with Moor Insights & Strategy, said that EU members are very concerned with how they are perceived globally. “The real challenge is that applying regulations too early, without the industry being mature enough, risks hurting European companies and European competitiveness, which I believe is a major factor in why these regulations have been paused for now,” Sag said. “Especially when you consider the current rate of change within AI, there’s just a chance that they could spend a long time on this regulation and by the time it’s out, it’s already well out of date. They will have to act fast, though, when the time is right.” Added Vincent Schmalbach, an independent AI engineer in Munich, “The most interesting part is how this represents a major shift in EU thinking. It went from being the world’s strictest tech regulator to acknowledging they need to focus on not falling further behind in the AI race.”
Michael Isbitski, principal application security architect for genAI at ADP, the $19 billion HR and payroll enterprise, and also a former Gartner analyst, sees the two proposed EU legislative efforts as potentially having had a massive impact on data strategies. The proposed AI rule, he said, involved the retention of AI-generated data logs. “Everywhere there is some kind of AI transaction, you need to retain those logs, for every query, anywhere,” Isbitski said. “Think about what needs to be done to secure your requirements and controls systems, along with your cloud security. Logging seems simple, but if you look at a complete AI interaction, there are an awful lot of interconnects.” (A minimal sketch of that kind of per-query retention appears after this item.)
However, Flavio Villanustre, global chief information security officer of LexisNexis Risk Solutions, said the pausing of these two EU potential rules will likely have no significant impact on enterprises. “This means you can continue to do everything you were doing before. There will be no new constraints on top of anything you were doing,” Villanustre said. But the broader issue of genAI liability absolutely needs to be addressed because the current mechanisms are woefully inadequate, he said. That is because the very nature of genAI, especially in its stochastic and probabilistic attributes, makes liability attribution virtually impossible. Let’s say something bad happens, for example, with an LLM deployment where a company loses billions of dollars or there is a loss of life. There are typically going to be three possible groups to blame: the model-maker, which creates the algorithm and trains the model; the enterprise, which fine-tunes the model and adapts it to that enterprise’s needs; and the user base, which would be either employees, partners, or customers who pose the queries to the model. Overwhelmingly, when a problem happens, it will be because of the interactions of efforts by two or three of those groups. Without the new legislation being proposed by the EU, the only means of determining liability will be via legal contracts.
But genAI is a different kind of system. It can be asked the identical question five times and offer five different answers. That being the case, if its developers cannot accurately predict what it will do in different situations, Villanustre wondered what chance attorneys have at anticipating all problems. “That is a challenge: determining who has the responsibility,” Villanustre said. “This legislation was meant to define the liability outside of contracts.”
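
The log-retention requirement Isbitski describes is simple to illustrate. Below is a minimal, hypothetical sketch in Python of keeping one audit record per AI transaction; the record fields and the call_model() placeholder are illustrative assumptions, not anything prescribed by the withdrawn EU proposal.

    # Hypothetical audit-logging wrapper: one retained record per AI transaction.
    # Field names and call_model() are illustrative assumptions, not a mandated schema.
    import json, time, uuid

    def call_model(prompt: str) -> str:
        # Placeholder for whatever model or provider the enterprise actually uses.
        return "model response goes here"

    def logged_query(prompt: str, user_id: str, model_id: str, log_path: str = "ai_audit.log") -> str:
        response = call_model(prompt)
        record = {
            "transaction_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "user_id": user_id,    # who posed the query (employee, partner, or customer)
            "model_id": model_id,  # which model and version answered
            "prompt": prompt,
            "response": response,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")  # append-only retention, one line per query
        return response

Even a toy wrapper like this hints at Isbitski’s point about interconnects: every gateway, fine-tuned variant, and downstream application in the chain would need an equivalent record for the logs to support the kind of liability attribution Villanustre describes.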