Nearly 40 years ago, Cisco helped build the Internet. Today, much of the Internet is powered by Cisco technology, a testament to the trust customers, partners, and stakeholders place in Cisco to securely connect everything to make anything possible. This trust is not something we take lightly. And, when it comes to AI, we know that trust is on the line.

In my role as Cisco's chief legal officer, I oversee our privacy organization. In our most recent Consumer Privacy Survey, polling more than 2,600 respondents across 12 geographies, consumers shared both their optimism about the power of AI to improve their lives and their concern about how businesses use AI today.

I wasn't surprised when I read these results; they reflect my conversations with employees, customers, partners, policy makers, and industry peers about this remarkable moment in time. The world is watching with anticipation to see whether companies can harness the promise and potential of generative AI in a responsible way.

For Cisco, responsible business practices are core to who we are. We agree AI must be safe and secure. That's why we were encouraged to see the call for "robust, reliable, repeatable, and standardized evaluations of AI systems" in President Biden's executive order of October 30. At Cisco, impact assessments have long been an important tool as we work to protect and preserve customer trust.

Impact assessments at Cisco

AI is not new for Cisco.
We've been incorporating predictive AI across our connected portfolio for over a decade. This encompasses a wide range of use cases, such as better visibility and anomaly detection in networking, threat predictions in security, advanced insights in collaboration, statistical modeling and baselining in observability, and AI-powered TAC support in customer experience.

At its core, AI is about data. And if you're using data, privacy is paramount.

In 2015, we created a dedicated privacy team to embed privacy by design as a core component of our development methodologies. This team is responsible for conducting privacy impact assessments (PIAs) as part of the Cisco Secure Development Lifecycle. These PIAs are a mandatory step in our product development lifecycle and our IT and business processes. Unless a product has been reviewed through a PIA, it will not be approved for launch. Similarly, an application will not be approved for deployment in our enterprise IT environment unless it has gone through a PIA. And, after completing a Product PIA, we create a public-facing Privacy Data Sheet to provide transparency to customers and users about product-specific personal data practices.

As the use of AI became more pervasive, and the implications more novel, it became clear that we needed to build upon our foundation of privacy to develop a program matched to the specific risks and opportunities associated with this new technology.

Responsible AI at Cisco

In 2018, in accordance with our Human Rights policy, we published our commitment to proactively respect human rights in the design, development, and use of AI.
Given the pace at which AI was developing, and the many unknown impacts, both positive and negative, on individuals and communities around the world, it was important to outline our approach to issues of safety, trustworthiness, transparency, fairness, ethics, and equity.

We formalized this commitment in 2022 with Cisco's Responsible AI Principles, documenting our position on AI in more detail. We also published our Responsible AI Framework to operationalize our approach. Cisco's Responsible AI Framework aligns to the NIST AI Risk Management Framework and sets the foundation for our Responsible AI (RAI) assessment process.

We use the assessment in two instances: when our engineering teams are developing a product or feature powered by AI, and when Cisco engages a third-party vendor to provide AI tools or services for our own internal operations.

Through the RAI assessment process, modeled on Cisco's PIA program and developed by a cross-functional team of Cisco subject matter experts, our trained assessors gather information to surface and mitigate risks associated with the intended, and importantly the unintended, use cases for each submission. These assessments look at various aspects of AI and product development, including the model, training data, fine-tuning, prompts, privacy practices, and testing methodologies. The ultimate goal is to identify, understand, and mitigate any issues related to Cisco's RAI Principles: transparency, fairness, accountability, reliability, security, and privacy.

And, just as we've adapted and evolved our approach to privacy over the years in alignment with the changing technology landscape, we know we will need to do the same for Responsible AI. The novel use cases for, and capabilities of, AI are creating new considerations almost daily.
Indeed, we have already adapted our RAI assessments to reflect emerging standards, regulations, and innovations. And, in many ways, we recognize this is just the beginning. While that requires a certain level of humility and a readiness to adapt as we continue to learn, we are steadfast in our position of keeping privacy, and ultimately trust, at the core of our approach.

Read the Cisco Consumer Privacy Study