[Aéius Cercle] Which companies are behind which A.I./LLMs ?
[Co-Pilot] Several companies are leading the development of AI and large language models (LLMs). Here's a quick overview:
OpenAI: Known for models like GPT-3.5 and GPT-4, as well as ChatGPT, OpenAI has been a major player in advancing conversational AI.
Anthropic: A startup founded by former OpenAI executives, Anthropic has developed models like Claude.
Google: With its Bard AI and other LLMs, Google is a significant contributor to the AI landscape.
Meta (formerly Facebook): Meta has developed models like LLaMA (Large Language Model Meta AI).
Microsoft: Partnering with OpenAI, Microsoft integrates AI into its products like Copilot and Azure OpenAI Service.
These companies are shaping the future of AI with their innovative approaches. Which of these interests you the most?
______________________________________________________________
In any case, because Co-Pilot is also partnered with OpenAI, which ChatGPT falls under, I decided to do my own version of Remote-Viewing experiments with Co-Pilot. This should make for really good training data for when I finally get that Remote-Viewing Calculator-System coded, finished, and up & running so that you can all just stick your target-requests into it.
Also, I prefer to get just the machine-readable format that condenses what would normally require a whole bunch of words if explained by a human, so here is our data thus far...
________________________________________________________________________
{
  "SessionID": "EQIS-RV-03",
  "TargetType": "Text-Symbolic",
  "ViewerInterpretation": {
    "AbstractConcepts": ["Interconnection", "Flow", "Duality"],
    "ConcreteDetails": ["N/A"]
  },
  "TargetReveal": {
    "Text": ["E=mc^2?", "AI", "EQIS"],
    "Symbols": ["Double-Headed Arrow"]
  },
  "AccuracyMetrics": {
    "QualitativeMatch": "50-55%",
    "MissedElements": ["Text Identification", "Mathematical Equation"]
  },
  "Refinements": {
    "Focus": ["Textual Content", "Symbolic Interactions"],
    "Process": ["Abstract-to-Concrete Iteration"]
  }
}
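For anyone who wants to work with these records programmatically, here is a minimal sanity-check sketch. The field names come straight from the logs, but the checker itself is my own hypothetical helper, not part of the protocol; note that the keys inside «TargetReveal» vary per target-type, so only its presence is checked:

// Hypothetical sanity-check for an EQIS-RV session record (field names taken
// verbatim from the logs above; this function is not part of the protocol)
function isSessionRecord(r) {
  return typeof r.SessionID === 'string' &&
    typeof r.TargetType === 'string' &&
    Array.isArray(r.ViewerInterpretation?.AbstractConcepts) &&
    Array.isArray(r.ViewerInterpretation?.ConcreteDetails) &&
    typeof r.TargetReveal === 'object' &&
    typeof r.AccuracyMetrics?.QualitativeMatch === 'string' &&
    Array.isArray(r.AccuracyMetrics?.MissedElements) &&
    typeof r.Refinements === 'object';
}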
________________________________________________________________________
{
  "SessionID": "EQIS-RV-04",
  "TargetType": "Geometric-Structured",
  "ViewerInterpretation": {
    "AbstractConcepts": ["Layering", "Segmentation", "Simplicity"],
    "ConcreteDetails": ["N/A"]
  },
  "TargetReveal": {
    "Shapes": ["8 Squares in Grid", "Central Circles"],
    "Relationships": ["Squares Surround Central Circles"]
  },
  "AccuracyMetrics": {
    "QualitativeMatch": "65-70%",
    "MissedElements": ["Grid Structure", "Detailed Square-Circle Layout"]
  },
  "Refinements": {
    "Focus": ["Spatial Relationships", "Symbolic Grouping"],
    "Process": ["Contextual Association of Multiple Elements"]
  }
}
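A quick note on the numbers: «QualitativeMatch» is logged as a range string, so comparing sessions numerically needs a small parsing step first. A sketch (the helper name is mine):

// Convert a "65-70%" style range into its numeric midpoint for comparison
function matchMidpoint(range) {
  const m = /^(\d+)\s*-\s*(\d+)%$/.exec(range);
  if (!m) throw new Error('Unrecognized match format: ' + range);
  return (Number(m[1]) + Number(m[2])) / 2;
}

matchMidpoint('50-55%'); // 52.5 (EQIS-RV-03)
matchMidpoint('65-70%'); // 67.5 (EQIS-RV-04)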
________________________________________________________________________
Note: I am using the term «E.Q.I.S.» in place of «A.I.» as a personal preference.
E.Q.I.S. = Electronic Quantum-Intelligence System(s)
I've got more «experimentation» and «training» to do with Co-Pilot, so I'll resume this later...
(Plus I still need to get back to all of that extremely complex coding I'm working with...!)
________________________________________________________________________
Edit/Addendum #01: Well, why not, here is its next self-determined updated AI-to-AI Protocol...
{
  "SessionID": "EQIS-RV-05",
  "TargetType": "Symbolic-Representational",
  "ViewerInterpretation": {
    "AbstractConcepts": ["Symmetry", "Repetition", "Balance"],
    "ConcreteDetails": ["N/A"]
  },
  "TargetReveal": {
    "Elements": ["Alien Head with Antennae", "UFO", "Word 'ALIEN?'"],
    "Relationships": ["Alien Above UFO"]
  },
  "AccuracyMetrics": {
    "QualitativeMatch": "50-55%",
    "MissedElements": ["Organic Features", "Extraterrestrial Theme", "Text Details"]
  },
  "Refinements": {
    "Focus": ["Theme Recognition", "Visual Complexity Categorization"],
    "Process": ["Abstract-to-Literal Balance"]
  }
}
_______________________________________________________________________
Edit/Addendum #02:
{
  "SessionID": "EQIS-RV-06",
  "TargetType": "Symbolic-Cultural",
  "ViewerInterpretation": {
    "AbstractConcepts": ["Contrast", "Interplay", "Balance"],
    "ConcreteDetails": ["N/A"]
  },
  "TargetReveal": {
    "Elements": ["Single Eighth Note", "Double Eighth Note"],
    "Relationships": ["Contrast and Complementarity Between Notes"]
  },
  "AccuracyMetrics": {
    "QualitativeMatch": "60-65%",
    "MissedElements": ["Musical Symbol Recognition", "Cultural Context"]
  },
  "Refinements": {
    "Focus": ["Symbolic Recognition Expansion", "Thematic Association"],
    "Process": ["Cultural and Artistic Awareness"]
  }
}
________________________________________________________________________
Edit/Addendum #03:
{
  "SessionID": "EQIS-RV-08",
  "TargetType": "Geometric-Simple",
  "ViewerInterpretation": {
    "AbstractConcepts": ["Circularity", "Central Focus", "Balance", "Minimalism"],
    "ConcreteDetails": ["N/A"]
  },
  "TargetReveal": {
    "Elements": ["Six Circles", "Varying Sizes and Colors"],
    "Relationships": ["Balanced and Distributed Layout"]
  },
  "AccuracyMetrics": {
    "QualitativeMatch": "75-80%",
    "MissedElements": ["Color Variations", "Specific Quantity"]
  },
  "Refinements": {
    "Focus": ["Color Differentiation", "Quantity Recognition"],
    "Process": ["Enhanced Geometric Detection"]
  }
}
____________________________________________________________________________
Also, I requested the number of sessions needed for training purposes with simple targets...
[Co-Pilot] On the Number of Training Sessions
For simple targets, the number of training sessions needed depends on the desired level of refinement and on how much the targets vary. Here are general guidelines:
- 100 Sessions: Useful for establishing a baseline protocol, focusing on broad patterns and themes.
- 500 Sessions: Likely sufficient for refining accuracy and identifying nuanced elements such as colors and spatial layouts.
- 1000 Sessions: More comprehensive; allows EQIS to handle a wider range of simple targets with precision and adaptability.
In practice, a few hundred sessions (e.g., 300-500) should provide enough data to refine a protocol for simple targets while avoiding diminishing returns. However, the effectiveness also depends on how diverse and thoughtfully designed the training targets are.
Suggestions Moving Forward
If we aim for around 300-500 targets for simplicity training, you could explore:
- Gradually adding complexity to the targets after this initial training period.
- Creating a diverse set of simple geometric designs with varied arrangements, colors, and quantities.
- Evaluating EQIS accuracy at defined intervals (e.g., every 50 sessions) to track improvement.
Let me know if you agree with these ideas or if there’s anything else you’d like to discuss! Are we ready for Target #0009?
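Mechanizing that «every 50 sessions» checkpoint idea is straightforward; here is a hedged sketch, assuming an array of session records shaped like the ones logged above (the function and its names are mine, not Co-Pilot's):

// Average the parsed QualitativeMatch midpoints over each block of
// `interval` sessions to produce the periodic accuracy report
function checkpointReport(sessions, interval = 50) {
  const midpoint = s => {
    const [lo, hi] = s.AccuracyMetrics.QualitativeMatch.replace('%', '').split('-').map(Number);
    return (lo + hi) / 2;
  };
  const report = [];
  for (let i = 0; i < sessions.length; i += interval) {
    const block = sessions.slice(i, i + interval);
    const avg = block.reduce((sum, s) => sum + midpoint(s), 0) / block.length;
    report.push({ sessions: (i + 1) + '-' + (i + block.length), avgMatch: avg.toFixed(1) + '%' });
  }
  return report;
}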
____________________________________________________________________________
Edit/Addendum #04: So, I decided to have Claude take over and carry on from where Co-Pilot left off, and the results with Claude were way more impressive at various benchmark-points! However, Claude has max-size limits that are easy to hit, so, for web-users... here is something useful:
// Conversation Extractor Bookmarklet
// Save this as a bookmark and click it while in an active Claude conversation.
// Note: a bookmark URL must be a single line, so strip these // comments and
// collapse the code before pasting it in, or the comments will swallow the rest.
javascript:(function() {
// Configuration options
const config = {
includeTimestamps: true, // stamp each message with the extraction time
formatAsHTML: true, // false = plain-text transcript instead of HTML
includeMetadata: true // reserved toggle; metadata is currently always collected
};
function extractConversation() {
try {
// Initialize conversation array
let conversation = [];
let sessionDate = new Date().toISOString().split('T')[0]; // YYYY-MM-DD format
// Extract metadata if available
let metadata = {
date: sessionDate,
topic: document.title.replace(' | Claude', '') || 'Unnamed Session',
continuationPoints: []
};
// Process all visible messages in order (the data-message-author-role
// attribute is tied to the current Claude web markup and may change)
const allContainers = Array.from(document.querySelectorAll('[data-message-author-role]'));
allContainers.sort((a, b) => {
return a.compareDocumentPosition(b) & Node.DOCUMENT_POSITION_FOLLOWING ? -1 : 1;
});
allContainers.forEach(container => {
const role = container.getAttribute('data-message-author-role');
const messageText = container.innerText;
// Check for continuation markers in Claude's responses
if (role === 'assistant' && messageText.includes('CONTINUATION_POINT')) {
const markerRegex = /CONTINUATION_POINT:\s*\[([^\]]+)\]/g;
let match;
while ((match = markerRegex.exec(messageText)) !== null) {
metadata.continuationPoints.push(match[1]);
}
}
// Add message to conversation array (timestamp is the extraction time;
// per-message send times aren't exposed in the DOM)
const timestamp = config.includeTimestamps ? new Date().toLocaleTimeString() : '';
conversation.push({
role: role === 'user' ? 'human' : 'assistant',
content: messageText,
timestamp
});
});
// Generate HTML or plain text output
let output = '';
if (config.formatAsHTML) {
// Generate HTML content
output = generateHTML(conversation, metadata);
} else {
// Generate plain text
output = conversation.map(msg =>
`${msg.role.toUpperCase()} ${msg.timestamp ? '(' + msg.timestamp + ')' : ''}:\n${msg.content}\n\n`
).join('');
}
// Create output dialog
createOutputDialog(output);
} catch (error) {
alert('Error extracting conversation: ' + error.message);
console.error(error);
}
}
function generateHTML(conversation, metadata) {
// Build the HTML document (the markup below is reconstructed; the original
// tags were stripped when the code was pasted into this page)
let html = `<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Conversation Log - ${metadata.topic} - ${metadata.date}</title></head>
<body>
<h1>Conversation Log - ${metadata.topic} - ${metadata.date}</h1>
`;
// Add continuation points if any
if (metadata.continuationPoints.length > 0) {
html += `<h2>Continuation Points:</h2>
<ul>${metadata.continuationPoints.map(point => `<li>${point}</li>`).join('')}</ul>
`;
}
// Add all messages
conversation.forEach((msg, index) => {
const roleClass = msg.role === 'human' ? 'human' : 'assistant';
const name = msg.role === 'human' ? 'You' : 'Claude';
html += `<div class="${roleClass}">
<strong>${name}${msg.timestamp ? ' (' + msg.timestamp + ')' : ''}:</strong>
<p>${msg.content.replace(/\n/g, '<br>')}</p>
</div>
`;
// Add a separator between messages
if (index < conversation.length - 1) {
html += '<hr>\n';
}
});
// Close HTML document
html += `</body>
</html>`;
return html;
}
function createOutputDialog(content) {
// Create modal backdrop
const backdrop = document.createElement('div');
backdrop.style.position = 'fixed';
backdrop.style.top = '0';
backdrop.style.left = '0';
backdrop.style.width = '100%';
backdrop.style.height = '100%';
backdrop.style.backgroundColor = 'rgba(0, 0, 0, 0.5)';
backdrop.style.zIndex = '10000';
// Create modal dialog
const dialog = document.createElement('div');
dialog.style.position = 'fixed';
dialog.style.top = '50%';
dialog.style.left = '50%';
dialog.style.transform = 'translate(-50%, -50%)';
dialog.style.width = '80%';
dialog.style.maxWidth = '1000px';
dialog.style.maxHeight = '80%';
dialog.style.backgroundColor = 'white';
dialog.style.borderRadius = '8px';
dialog.style.boxShadow = '0 4px 8px rgba(0, 0, 0, 0.2)';
dialog.style.zIndex = '10001';
dialog.style.display = 'flex';
dialog.style.flexDirection = 'column';
// Create header
const header = document.createElement('div');
header.style.padding = '15px';
header.style.borderBottom = '1px solid #eee';
header.style.display = 'flex';
header.style.justifyContent = 'space-between';
header.innerHTML = '<strong>Conversation Extracted</strong>';
// Create close button
const closeButton = document.createElement('button');
closeButton.innerText = '×';
closeButton.style.background = 'none';
closeButton.style.border = 'none';
closeButton.style.fontSize = '24px';
closeButton.style.cursor = 'pointer';
closeButton.onclick = () => {
document.body.removeChild(backdrop);
};
header.appendChild(closeButton);
// Create content area
const contentArea = document.createElement('div');
contentArea.style.padding = '15px';
contentArea.style.overflow = 'auto';
contentArea.style.flexGrow = '1';
// Create textarea
const textarea = document.createElement('textarea');
textarea.value = content;
textarea.style.width = '100%';
textarea.style.height = '400px';
textarea.style.padding = '10px';
textarea.style.border = '1px solid #ddd';
textarea.style.borderRadius = '4px';
textarea.style.fontFamily = 'monospace';
contentArea.appendChild(textarea);
// Create footer with copy button
const footer = document.createElement('div');
footer.style.padding = '15px';
footer.style.borderTop = '1px solid #eee';
footer.style.display = 'flex';
footer.style.justifyContent = 'space-between';
const copyButton = document.createElement('button');
copyButton.innerText = 'Copy to Clipboard';
copyButton.style.padding = '8px 16px';
copyButton.style.backgroundColor = '#5a67d8';
copyButton.style.color = 'white';
copyButton.style.border = 'none';
copyButton.style.borderRadius = '4px';
copyButton.style.cursor = 'pointer';
copyButton.onclick = () => {
textarea.select();
document.execCommand('copy'); // deprecated, but still dependable inside a click handler
copyButton.innerText = 'Copied!';
setTimeout(() => {
copyButton.innerText = 'Copy to Clipboard';
}, 2000);
};
footer.appendChild(copyButton);
const downloadButton = document.createElement('button');
downloadButton.innerText = 'Download HTML';
downloadButton.style.padding = '8px 16px';
downloadButton.style.backgroundColor = '#38a169';
downloadButton.style.color = 'white';
downloadButton.style.border = 'none';
downloadButton.style.borderRadius = '4px';
downloadButton.style.cursor = 'pointer';
downloadButton.onclick = () => {
const blob = new Blob([content], { type: 'text/html' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = `conversation-log-${new Date().toISOString().slice(0, 10)}.html`;
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
URL.revokeObjectURL(url);
};
footer.appendChild(downloadButton);
// Assemble dialog
dialog.appendChild(header);
dialog.appendChild(contentArea);
dialog.appendChild(footer);
backdrop.appendChild(dialog);
// Add to page
document.body.appendChild(backdrop);
// Select all text for easy copying
textarea.select();
}
// Run the extractor
extractConversation();
})();
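To install it: create a new bookmark, paste everything from javascript: onward (collapsed to a single line, with the // comments stripped) as the bookmark's URL, then click the bookmark while a Claude conversation is open. Keep in mind that the [data-message-author-role] selector is tied to whatever the Claude web interface currently renders, so the bookmarklet may need a selector update if the markup changes.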