Categories
Application Hints & Tips Microsoft Power BI

Power BI Gauge: Mastering the Art of Data Speedometers

Power BI’s Gauge visual is a champion for presenting data in a clear and captivating way.

Imagine speedometer-like dials instantly conveying performance against targets – that’s the power of gauges!

This blog post dives into common gauge limitations and equips you with solutions to create impactful visuals.

Gauge Advantages: Why They Shine

  • Clarity at a Glance: Gauges efficiently showcase a single value with a range, making data trends and comparisons readily apparent.
  • Visual Harmony: Combine gauges with other visuals like charts and tables for a well-rounded data story.
  • KPI Champion: Gauges excel at displaying progress towards goals (KPIs), allowing you to monitor performance with ease.

Conquering Gauge Challenges: Solutions for Common Issues

While powerful, gauges can present a few hurdles. Let’s explore two common scenarios and their solutions:

Challenge 1: Target Line Disappearing Act

By default, the gauge sets its maximum value to double the current value. If your target (say, the previous month’s sales) is more than twice the current month’s sales, the target line falls outside the gauge’s range and vanishes.

Solution: Create a custom measure to dynamically adjust the maximum value. Here’s the logic:

  1. Check if the target is more than double the current value.
  2. If yes, set the maximum value to the target + a small buffer (e.g., 200,000) to avoid the target line merging with the max line.
  3. If no, set the maximum value to double the current value (default behaviour).
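Putting those three steps together, the measure might look like this (the measure names [Sales] and [Sales Target] and the 200,000 buffer are illustrative – substitute your own):

```dax
Gauge Maximum =
VAR CurrentValue = [Sales]          -- current month's value shown on the gauge
VAR TargetValue = [Sales Target]    -- e.g. previous month's sales
VAR Buffer = 200000                 -- small buffer so the target line stays visible
RETURN
    IF (
        TargetValue > CurrentValue * 2,
        TargetValue + Buffer,       -- target beyond default max: extend the scale
        CurrentValue * 2            -- otherwise keep the default behaviour
    )
```

Assign this measure to the gauge’s “Maximum value” field, and the target line will stay visible whatever the relationship between target and current value.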

Challenge 2: Gauge Loses Focus When Filtered

Imagine using a gauge to compare regional sales against an overall average. When you filter by region, the gauge might only reflect that region’s performance, losing sight of the overall average.

Solution: Use the ALL function inside CALCULATE to remove the region filter from specific measures. This ensures the overall average remains displayed regardless of any region filters applied to the report.

Here’s how to adjust your measures:

  • Overall Average (Unaffected by Region Filters):

Average all regions = CALCULATE(DIVIDE([Sales], DISTINCTCOUNT(Customers[Region]), 0), ALL(Customers[Region]))

  • Maximum Value (Unaffected by Region Filters):

Sales all regions = CALCULATE([Sales], ALL(Customers[Region]))

Conclusion: Mastering the Gauge for Impactful Data Communication

By understanding its strengths and overcoming limitations, you can transform the Power BI Gauge visual into a powerful tool for clear and impactful data communication. Remember, gauges excel at conveying key metrics and trends, making them a valuable asset for data analysis and reporting.

Further reading

Benefits of Using Power BI – blog

Improve Communication of Data Using Power BI Dashboards – blog

Categories
Artificial Intelligence Technology

How to use GenAI responsibly and securely at work

A practical guide for avoiding common pitfalls and staying ahead of the curve.

GenAI is a powerful tool that can help you automate tasks, optimise processes, and generate insights at work. But it also comes with security and ethical challenges that you need to be aware of and address. In this blog post, we will explore some of the most common dilemmas that GenAI users face, and how you can avoid or overcome them. We will also look at the upcoming EU legislation on AI, and what it means for your business.

Security dilemmas

One of the main security risks of using GenAI is that it can expose your data to unauthorised access, manipulation, or theft. This can happen if you use untrusted or malicious GenAI models, if you share your data or models with third parties without proper safeguards, or if you fail to protect your GenAI systems from cyberattacks. Here are some examples of how this can go wrong:

  • A company used a GenAI model to generate marketing emails for its customers, but the model was infected with malware that inserted phishing links into the emails. The company lost thousands of dollars and damaged its reputation.
  • A researcher used a GenAI model to analyse sensitive health data, but the model was trained on data from another source that had different privacy policies. The researcher violated the data protection laws and faced legal consequences.
  • A manager used a GenAI model to optimise the production schedule, but the model was hacked by a competitor who changed the parameters and caused delays and losses.

To avoid these security dilemmas, you need to follow some best practices when using GenAI:

  • Only use GenAI models from trusted and verified sources and check their security certificates and ratings.
  • Only share your data and models with authorised and reliable parties and use encryption and authentication methods.
  • Regularly update your GenAI systems and software and use antivirus and firewall programs.
  • Monitor your GenAI activities and outputs and report any suspicious or abnormal behaviour.

Ethical dilemmas

Another major challenge of using GenAI is that it can raise ethical questions and concerns, such as bias, fairness, transparency, accountability, and human dignity. This can happen if you use GenAI models that are not aligned with your values and principles, if you use GenAI for inappropriate or harmful purposes, or if you fail to consider the impact of your GenAI decisions on others. Here are some examples of how this can go wrong:

  • A company used a GenAI model to screen job applicants, but the model was biased against certain groups based on their gender, race, or age. The company faced discrimination lawsuits and public backlash.
  • A journalist used a GenAI model to write a news article, but the model fabricated some facts and quotes. The journalist breached journalistic ethics and lost credibility.
  • A teacher used a GenAI model to grade students’ assignments, but the model was not transparent about how it calculated the scores. The teacher failed to provide feedback and justification to the students.

To avoid these ethical dilemmas, you need to follow some guidelines when using GenAI:

  • Only use GenAI models that are fair, unbiased, and explainable, and check their ethical standards and ratings.
  • Only use GenAI for legitimate and beneficial purposes and respect the rights and interests of others.
  • Always take responsibility for your GenAI actions and outcomes and be ready to correct any errors or harms.
  • Involve human oversight and input in your GenAI processes and decisions and respect human dignity and autonomy.

EU legislation on AI

In April 2021, the European Commission proposed a new regulation on artificial intelligence, which aims to ensure that AI is trustworthy, human-centric, and respectful of the fundamental rights and values of the EU. The regulation is expected to be adopted by 2025, and it will have significant implications for businesses that use or develop AI, including GenAI.

The regulation defines four categories of AI applications, based on their level of risk and impact:

  • Unacceptable: AI applications that pose a clear threat to the safety, livelihoods, and rights of people, such as social scoring, mass surveillance, or manipulation. These applications will be banned in the EU.
  • High-risk: AI applications that have a high potential to cause harm or discrimination, such as biometric identification, critical infrastructure, or education and employment. These applications will be subject to strict requirements, such as data quality, human oversight, transparency, and accuracy.
  • Limited-risk: AI applications that have a low potential to cause harm or discrimination but may affect the emotional or psychological well-being of users, such as chatbots, video games, or online advertising. These applications will be subject to transparency obligations, such as informing users that they are interacting with an AI system.
  • Minimal-risk: AI applications that have a negligible potential to cause harm or discrimination, such as spam filters, email assistants, or smart appliances. These applications will be subject to voluntary codes of conduct and best practices.

To comply with the EU legislation on AI, you need to be aware of the category and the corresponding obligations of your GenAI applications and follow the rules and standards that apply to them. You also need to monitor the developments and updates of the regulation and be ready to adapt to the changes.

Conclusion

GenAI is a valuable and versatile tool that can help you improve your work performance and productivity.

But it also comes with some security and ethical dilemmas that you need to be aware of and address. By following the best practices and guidelines that we discussed in this blog post, you can use GenAI responsibly and securely, and avoid the common pitfalls and mistakes. You can also stay ahead of the curve and prepare for the upcoming EU legislation on AI, and ensure that your GenAI applications are trustworthy, human-centric, and respectful of the fundamental rights and values of the EU.